Nov 24 11:55:49 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 11:55:49 crc restorecon[4578]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:49 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:49 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 
11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:55:50 crc 
restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc 
restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:55:50 crc restorecon[4578]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 11:55:51 crc kubenswrapper[4782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:55:51 crc kubenswrapper[4782]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 11:55:51 crc kubenswrapper[4782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:55:51 crc kubenswrapper[4782]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
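The long run of restorecon entries above is the boot-time relabel pass over /var/lib/kubelet: paths whose SELinux type was deliberately set to container_file_t (with per-pod MCS category pairs such as s0:c7,c13) are left alone and logged as "not reset as customized by admin", while genuinely mislabeled files, such as /var/usrlocal/bin/kubenswrapper above, are relabeled so the binary can run in the kubelet domain. A minimal sketch of how to reproduce the same check by hand, assuming standard SELinux tooling (policycoreutils) is present on the node:

  # Dry run of the relabel pass: -n makes no changes, -v reports what would be reset.
  restorecon -Rnv /var/lib/kubelet

  # Compare the context the loaded policy expects for a path with what is on disk.
  matchpathcon /var/lib/kubelet/plugins
  ls -Zd /var/lib/kubelet/plugins

  # Types restorecon treats as admin-customized and therefore never resets;
  # container_file_t is typically listed here (path assumes the targeted policy).
  cat /etc/selinux/targeted/contexts/customizable_types
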
Nov 24 11:55:51 crc kubenswrapper[4782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 11:55:51 crc kubenswrapper[4782]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.248603 4782 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254612 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254641 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254647 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254652 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254660 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254668 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254675 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254682 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254690 4782 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254696 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254702 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254707 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254711 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254717 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254729 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254734 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254739 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254744 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254749 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254756 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254763 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254769 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254786 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254792 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254797 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254802 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254807 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254812 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254817 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254821 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254826 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254831 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254836 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254840 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254845 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254850 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254855 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254860 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254864 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254869 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254875 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254881 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254886 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254890 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254895 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254901 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254905 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254910 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254915 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254920 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254924 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254929 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254934 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254939 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254944 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254949 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254954 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254959 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254966 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254972 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254977 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254983 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254987 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254992 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.254997 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.255001 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.255006 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.255011 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.255016 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.255020 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.255025 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256816 4782 flags.go:64] FLAG: --address="0.0.0.0"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256842 4782 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256854 4782 flags.go:64] FLAG: --anonymous-auth="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256862 4782 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256869 4782 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256875 4782 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256883 4782 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256890 4782 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256896 4782 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256902 4782 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256908 4782 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256914 4782 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256920 4782 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256925 4782 flags.go:64] FLAG: --cgroup-root=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256931 4782 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256940 4782 flags.go:64] FLAG: --client-ca-file=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256947 4782 flags.go:64] FLAG: --cloud-config=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256954 4782 flags.go:64] FLAG: --cloud-provider=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256961 4782 flags.go:64] FLAG: --cluster-dns="[]"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256970 4782 flags.go:64] FLAG: --cluster-domain=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256977 4782 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256984 4782 flags.go:64] FLAG: --config-dir=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256992 4782 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.256999 4782 flags.go:64] FLAG: --container-log-max-files="5"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257006 4782 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257012 4782 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257018 4782 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257024 4782 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257030 4782 flags.go:64] FLAG: --contention-profiling="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257035 4782 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257047 4782 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257054 4782 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257060 4782 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257067 4782 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257073 4782 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257078 4782 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257084 4782 flags.go:64] FLAG: --enable-load-reader="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257091 4782 flags.go:64] FLAG: --enable-server="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257096 4782 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257105 4782 flags.go:64] FLAG: --event-burst="100"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257110 4782 flags.go:64] FLAG: --event-qps="50"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257116 4782 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257122 4782 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257128 4782 flags.go:64] FLAG: --eviction-hard=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257134 4782 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257140 4782 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257146 4782 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257154 4782 flags.go:64] FLAG: --eviction-soft=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257161 4782 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257168 4782 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257176 4782 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257183 4782 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257189 4782 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257196 4782 flags.go:64] FLAG: --fail-swap-on="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257202 4782 flags.go:64] FLAG: --feature-gates=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257218 4782 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257226 4782 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257233 4782 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257240 4782 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257248 4782 flags.go:64] FLAG: --healthz-port="10248"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257255 4782 flags.go:64] FLAG: --help="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257262 4782 flags.go:64] FLAG: --hostname-override=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257269 4782 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257277 4782 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257285 4782 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257292 4782 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257299 4782 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257306 4782 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257313 4782 flags.go:64] FLAG: --image-service-endpoint=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257321 4782 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257327 4782 flags.go:64] FLAG: --kube-api-burst="100"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257334 4782 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257341 4782 flags.go:64] FLAG: --kube-api-qps="50"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257350 4782 flags.go:64] FLAG: --kube-reserved=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257356 4782 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257363 4782 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257391 4782 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257399 4782 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257407 4782 flags.go:64] FLAG: --lock-file=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257414 4782 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257421 4782 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257464 4782 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257476 4782 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257483 4782 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257490 4782 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257497 4782 flags.go:64] FLAG: --logging-format="text"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257504 4782 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257512 4782 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257542 4782 flags.go:64] FLAG: --manifest-url=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257550 4782 flags.go:64] FLAG: --manifest-url-header=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257560 4782 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257569 4782 flags.go:64] FLAG: --max-open-files="1000000"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257578 4782 flags.go:64] FLAG: --max-pods="110"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257586 4782 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257594 4782 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257601 4782 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257610 4782 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257618 4782 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257625 4782 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257633 4782 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257650 4782 flags.go:64] FLAG: --node-status-max-images="50"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257657 4782 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257664 4782 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257672 4782 flags.go:64] FLAG: --pod-cidr=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257678 4782 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257693 4782 flags.go:64] FLAG: --pod-manifest-path=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257699 4782 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257707 4782 flags.go:64] FLAG: --pods-per-core="0"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257714 4782 flags.go:64] FLAG: --port="10250"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257723 4782 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257731 4782 flags.go:64] FLAG: --provider-id=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257738 4782 flags.go:64] FLAG: --qos-reserved=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257744 4782 flags.go:64] FLAG: --read-only-port="10255"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257751 4782 flags.go:64] FLAG: --register-node="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257758 4782 flags.go:64] FLAG: --register-schedulable="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257766 4782 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257778 4782 flags.go:64] FLAG: --registry-burst="10"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257785 4782 flags.go:64] FLAG: --registry-qps="5"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257792 4782 flags.go:64] FLAG: --reserved-cpus=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257799 4782 flags.go:64] FLAG: --reserved-memory=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257810 4782 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257818 4782 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257826 4782 flags.go:64] FLAG: --rotate-certificates="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257833 4782 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257841 4782 flags.go:64] FLAG: --runonce="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257848 4782 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257856 4782 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257864 4782 flags.go:64] FLAG: --seccomp-default="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257872 4782 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257880 4782 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257888 4782 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257896 4782 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257903 4782 flags.go:64] FLAG: --storage-driver-password="root"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257911 4782 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257918 4782 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257925 4782 flags.go:64] FLAG: --storage-driver-user="root"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257932 4782 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257939 4782 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257946 4782 flags.go:64] FLAG: --system-cgroups=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257953 4782 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257966 4782 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257973 4782 flags.go:64] FLAG: --tls-cert-file=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257981 4782 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257991 4782 flags.go:64] FLAG: --tls-min-version=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.257998 4782 flags.go:64] FLAG: --tls-private-key-file=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258005 4782 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258013 4782 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258020 4782 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258027 4782 flags.go:64] FLAG: --v="2"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258036 4782 flags.go:64] FLAG: --version="false"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258045 4782 flags.go:64] FLAG: --vmodule=""
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258054 4782 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258062 4782 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258232 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258241 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258249 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258255 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258262 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258267 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258274 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258280 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258287 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258296 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258304 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258313 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258320 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258327 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258334 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258341 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258353 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258360 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258366 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258400 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258408 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258414 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258421 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258426 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258432 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258439 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258445 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258451 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258459 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258464 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258471 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258477 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258482 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258488 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258494 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258503 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258511 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258518 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258525 4782 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258532 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258539 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258546 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258552 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258558 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258565 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258571 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258577 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258583 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258592 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258598 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258604 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258611 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258616 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258624 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258630 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258635 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258642 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258648 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258653 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258659 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258665 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258671 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258678 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258686 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258694 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258700 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258705 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258711 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258717 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258723 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.258729 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.258747 4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.273141 4782 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.273217 4782 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273353 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273437 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273450 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273460 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273469 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273477 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273485 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273493 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273501 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273509 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273517 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273526 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273535 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273544 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273553 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273560 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273568 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273580 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273590 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273598 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273605 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273613 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273621 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273629 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273636 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273644 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273652 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273660 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273668 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273677 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273684 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273692 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273700 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273708 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273718 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273727 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273735 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273743 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273750 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273758 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273767 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273778 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273791 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273800 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273808 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273818 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273826 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273834 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273842 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273849 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273857 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273866 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273874 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273883 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273892 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273900 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273908 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273916 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273924 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273932 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273942 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273951 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273960 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273968 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273976 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273985 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.273993 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274003 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274012 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274021 4782 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274031 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.274046 4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274327 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274343 4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274353 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274363 4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274418 4782 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274429 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274437 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274446 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274454 4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274463 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274470 4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274479 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274486 4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274497 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274509 4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274517 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274525 4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274533 4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274540 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274548 4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274555 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274563 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274571 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274579 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274589 4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274598 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274606 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274616 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274624 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274631 4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274639 4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274647 4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274654 4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274662 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274671 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274679 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274686 4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274694 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274701 4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274709 4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274716 4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274724 4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274731 4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274739 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274749 4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274759 4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274769 4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274777 4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274785 4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274793 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274801 4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274808 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274815 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274823 4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274831 4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274842 4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274851 4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274860 4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274868 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274878 4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274886 4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274894 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274902 4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274910 4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274919 4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274927 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274935 4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274943 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274950 4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274958 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.274966 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.274978 4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.275299 4782 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.281362 4782 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.281578 4782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.283654 4782 server.go:997] "Starting client certificate rotation"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.283705 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.284601 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-07 11:54:25.016370913 +0000 UTC
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.284746 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 311h58m33.731628262s for next certificate rotation
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.318049 4782 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.320243 4782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.337728 4782 log.go:25] "Validated CRI v1 runtime API"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.383535 4782 log.go:25] "Validated CRI v1 image API"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.386561 4782 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.392187 4782 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-11-50-41-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.392220 4782 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
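[Editor's note] The FLAG dump and the four repeated feature-gate passes above are hard to read in bulk. A hypothetical helper for reducing a captured journal like this one to the effective flag values and a tally of the unrecognized-gate warnings; the file name kubelet.log is illustrative, not something referenced by this log:

# Hypothetical helper: summarize a captured kubelet journal like the one
# above. Extracts FLAG: --name="value" pairs and tallies how often each
# unrecognized feature gate is warned about. Assumes the captured text
# has been saved to kubelet.log (illustrative name).
import re
from collections import Counter

FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')
GATE_RE = re.compile(r"unrecognized feature gate: (\w+)")

with open("kubelet.log", encoding="utf-8") as f:
    text = f.read()

flags = dict(FLAG_RE.findall(text))            # last occurrence wins per flag
gate_warnings = Counter(GATE_RE.findall(text))  # e.g. GatewayAPI appears once per pass

print(f"{len(flags)} flags parsed, e.g. --node-ip={flags.get('--node-ip')}")
for gate, n in gate_warnings.most_common(5):
    print(f"{gate}: warned {n} times")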
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.409140 4782 manager.go:217] Machine: {Timestamp:2025-11-24 11:55:51.40614521 +0000 UTC m=+0.649979029 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:71fe61e6-e0a5-43ad-b8ba-cb806e0524a7 BootID:b441a7b0-c6b8-4deb-981a-b7ea6afe0bee Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:47:38:23 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:47:38:23 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b2:8e:31 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:40:17:cf Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9a:00:57 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:53:ca:97 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:12:a7:c0:ac:bc:53 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:24:f8:15:fa:64 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: 
DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.409518 4782 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.409664 4782 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.411508 4782 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.411695 4782 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.411735 4782 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.412151 4782 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.412163 4782 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.412618 4782 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.412661 4782 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.412861 4782 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.412960 4782 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.416473 4782 kubelet.go:418] "Attempting to sync node with API server" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.416501 4782 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.416519 4782 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.416535 4782 kubelet.go:324] "Adding apiserver pod source" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.416548 4782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.420645 4782 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.421485 4782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.423085 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.423141 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.423224 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.423226 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.424238 4782 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425654 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425684 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425693 4782 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425702 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425714 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425723 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425731 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425744 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425754 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425766 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425784 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.425792 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.427463 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.430086 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.430457 4782 server.go:1280] "Started kubelet" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.431796 4782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 11:55:51 crc systemd[1]: Started Kubernetes Kubelet. 
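By this point systemd reports "Started Kubernetes Kubelet" even though every call to https://api-int.crc.testing:6443 above (the Service/Node reflectors and the CSINode wait) still fails with "connection refused". On a single-node cluster like this that is the expected bootstrap order: the kube-apiserver runs as a static pod that this same kubelet has yet to launch from /etc/kubernetes/manifests, so the errors should clear once the control plane comes up. A minimal diagnostic sketch, in Go, of a backoff loop that waits for that endpoint to begin accepting connections follows; the /readyz path and the skipped certificate verification are assumptions for ad-hoc diagnosis against this endpoint, not production settings.

// apiwait.go - minimal diagnostic sketch: poll the apiserver endpoint from
// the log above until the "connection refused" phase ends. Assumes the
// standard kube-apiserver /readyz endpoint; TLS verification is skipped
// purely for ad-hoc diagnosis.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // diagnostic only
		},
	}
	url := "https://api-int.crc.testing:6443/readyz"
	backoff := time.Second
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver reachable:", resp.Status)
			return
		}
		fmt.Printf("still down: %v (retrying in %s)\n", err, backoff)
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2 // cap the exponential backoff at 30s
		}
	}
}

Until that probe succeeds, the reflector "connection refused" entries below are noise to read past rather than a fault to chase.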
Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.431948 4782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.432255 4782 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.435262 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.435288 4782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.435330 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:16:44.119434059 +0000 UTC Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.435367 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 394h20m52.684070746s for next certificate rotation Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.436098 4782 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.436118 4782 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.437348 4782 server.go:460] "Adding debug handlers to kubelet server" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.437940 4782 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.438035 4782 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.438807 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="200ms" Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.439395 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.439496 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.439850 4782 factory.go:55] Registering systemd factory Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.439936 4782 factory.go:221] Registration of the systemd container factory successfully Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.440413 4782 factory.go:153] Registering CRI-O factory Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.440431 4782 factory.go:221] Registration of the crio container factory successfully Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.440504 4782 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix 
/run/containerd/containerd.sock: connect: no such file or directory Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.440548 4782 factory.go:103] Registering Raw factory Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.440578 4782 manager.go:1196] Started watching for new ooms in manager Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.442475 4782 manager.go:319] Starting recovery of all containers Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.447018 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.175:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aef54c5007cb7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:55:51.430405303 +0000 UTC m=+0.674239072,LastTimestamp:2025-11-24 11:55:51.430405303 +0000 UTC m=+0.674239072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456775 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456828 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456843 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456857 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456867 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456879 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456896 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 11:55:51 
crc kubenswrapper[4782]: I1124 11:55:51.456912 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456932 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456947 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456958 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456974 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.456991 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457009 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457021 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457042 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457054 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457069 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457110 4782 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457123 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457135 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457154 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457170 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457180 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457190 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457199 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457212 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457223 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457233 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457254 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457264 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457274 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457318 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457328 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457338 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457349 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457359 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457386 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457405 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457416 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457426 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457436 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457482 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457492 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457507 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457517 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457527 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457536 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457546 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457574 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457586 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457601 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457632 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457647 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457658 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457674 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457686 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457699 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457711 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457722 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457733 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457746 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457758 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457787 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457802 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457812 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457822 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457834 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457845 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457856 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457869 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457882 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457896 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457907 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457918 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457929 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457941 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457954 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457969 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457983 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.457993 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458006 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458018 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458032 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458043 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458055 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458067 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458080 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458092 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458105 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458127 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458141 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458161 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458182 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458195 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458209 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458227 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458240 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458254 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458267 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458283 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458296 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458308 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458344 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458363 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458511 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458528 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458547 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458567 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458581 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458595 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458610 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458630 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458644 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458656 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458667 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458679 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458691 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458704 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458718 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458733 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458745 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458756 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458769 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458788 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458809 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458833 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458846 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458859 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458873 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458887 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458900 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458914 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458926 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458937 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458950 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458963 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458973 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458982 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.458997 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459008 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459018 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459027 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459037 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459046 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459055 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459064 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459073 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459083 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459092 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459101 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459111 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459120 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459128 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459138 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459147 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459156 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459166 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459174 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459189 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459199 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459208 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459222 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459232 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459240 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459249 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459257 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459271 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459280 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459294 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459303 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459313 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459322 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459336 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459345 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459355 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459364 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459392 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459408 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459422 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459431 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459440 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459449 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459458 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459469 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459483 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459493 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459506 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459516 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459525 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459535 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459545 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459559 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459568 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459578 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" 
seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459591 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459600 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459610 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.459621 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463725 4782 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463792 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463821 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463844 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463866 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463890 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463903 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463915 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463935 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463952 4782 reconstruct.go:97] "Volume reconstruction finished" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.463961 4782 reconciler.go:26] "Reconciler: start to sync state" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.467270 4782 manager.go:324] Recovery completed Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.480107 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.481792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.481824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.481832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.482700 4782 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.482719 4782 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.482738 4782 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.487619 4782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.489497 4782 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.489540 4782 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.489570 4782 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.489666 4782 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.490263 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.490309 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.494162 4782 policy_none.go:49] "None policy: Start" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.494729 4782 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.494755 4782 state_mem.go:35] "Initializing new in-memory state store" Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.533016 4782 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.538355 4782 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.539502 4782 manager.go:334] "Starting Device Plugin manager" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.539675 4782 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.539695 4782 server.go:79] "Starting device plugin registration server" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.540082 4782 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.540101 4782 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.541460 4782 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.541625 4782 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.541830 4782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.548745 4782 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.589811 4782 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.589978 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.591945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.592032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.592055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.592455 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.592623 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.592665 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594000 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594029 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594133 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594511 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.594555 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595483 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595536 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595539 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595760 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595891 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.595941 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.596823 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.596844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.596905 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.597061 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.597277 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.597345 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.597986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598015 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598620 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598811 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598840 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598875 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.598910 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.599898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.599920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.599930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.639638 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="400ms" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.643819 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.645035 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.645076 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.645088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.645113 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.645450 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666050 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666088 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666129 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666154 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666269 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666294 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666316 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666337 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666398 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666422 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666453 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666473 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.666512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.767865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768083 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768214 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768220 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768258 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768273 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768322 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768327 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768339 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768346 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768428 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768287 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768626 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768649 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768666 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768677 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768685 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768730 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768727 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768737 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768768 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768770 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768805 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768763 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768822 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.768709 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.845843 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.846945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.846981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.846989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.847012 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:55:51 crc kubenswrapper[4782]: E1124 11:55:51.847469 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.929632 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.947588 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.963483 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.966237 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a739e130316d1aa74c3795d2f4257fb2a187e759e3cc58eb495d2d2bc4dfac6e WatchSource:0}: Error finding container a739e130316d1aa74c3795d2f4257fb2a187e759e3cc58eb495d2d2bc4dfac6e: Status 404 returned error can't find the container with id a739e130316d1aa74c3795d2f4257fb2a187e759e3cc58eb495d2d2bc4dfac6e Nov 24 11:55:51 crc kubenswrapper[4782]: W1124 11:55:51.980449 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0d64a0c44bdcdf720cc0b2cddd96ce7fd8eb0a0db48395acaa5781b3440c4d15 WatchSource:0}: Error finding container 0d64a0c44bdcdf720cc0b2cddd96ce7fd8eb0a0db48395acaa5781b3440c4d15: Status 404 returned error can't find the container with id 0d64a0c44bdcdf720cc0b2cddd96ce7fd8eb0a0db48395acaa5781b3440c4d15 Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.982497 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:55:51 crc kubenswrapper[4782]: I1124 11:55:51.988558 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:55:52 crc kubenswrapper[4782]: E1124 11:55:52.040360 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="800ms" Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.248227 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.249685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.249718 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.249728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.249749 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:55:52 crc kubenswrapper[4782]: E1124 11:55:52.250090 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Nov 24 11:55:52 crc kubenswrapper[4782]: W1124 11:55:52.353050 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:52 crc kubenswrapper[4782]: E1124 11:55:52.353153 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.444352 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:52 crc kubenswrapper[4782]: W1124 11:55:52.444582 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:52 crc kubenswrapper[4782]: E1124 11:55:52.444679 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.494332 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a739e130316d1aa74c3795d2f4257fb2a187e759e3cc58eb495d2d2bc4dfac6e"} Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.496383 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d61dcbe3b7d384214e1d0f4ae77ea4719d1b1f607a38f412d8265e9c54fcfe3"} Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.498623 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"519f8b607323bd3b5719e4fb858c20d5bd933badde98296e2731cf5b75725621"} Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.500699 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0d64a0c44bdcdf720cc0b2cddd96ce7fd8eb0a0db48395acaa5781b3440c4d15"} Nov 24 11:55:52 crc kubenswrapper[4782]: I1124 11:55:52.502172 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dacb5863cb2df65ba0b0bdff0165ad69a3165b152489320371eb10a2a8d87b07"} Nov 24 11:55:52 crc kubenswrapper[4782]: E1124 11:55:52.842117 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="1.6s" Nov 24 11:55:52 crc kubenswrapper[4782]: W1124 11:55:52.855470 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:52 crc kubenswrapper[4782]: E1124 11:55:52.855577 4782 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:52 crc kubenswrapper[4782]: W1124 11:55:52.858641 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:52 crc kubenswrapper[4782]: E1124 11:55:52.858745 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.050629 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.055384 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.055422 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.055434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.055461 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:55:53 crc kubenswrapper[4782]: E1124 11:55:53.055980 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.430937 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.506180 4782 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845" exitCode=0 Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.506514 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.506667 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.507896 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.508050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:53 
crc kubenswrapper[4782]: I1124 11:55:53.508170 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.513283 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.513839 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.513967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.514050 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.514128 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.514670 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.514838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.515006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.516098 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a" exitCode=0 Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.516171 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.516199 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.521675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.521713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.521728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.524006 4782 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.525766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.526316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.526331 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.527995 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d2da89689510f8ff72baf628824fadca2eca790012e735da1e889a924f0cbb28" exitCode=0 Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.528155 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.528173 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d2da89689510f8ff72baf628824fadca2eca790012e735da1e889a924f0cbb28"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.530581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.530746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.530895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.534356 4782 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1651442ece169fd005747748b3513a953fa5f32d64d2df271c25834af8a2b88d" exitCode=0 Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.534653 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.534667 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1651442ece169fd005747748b3513a953fa5f32d64d2df271c25834af8a2b88d"} Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.536857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.537002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:53 crc kubenswrapper[4782]: I1124 11:55:53.537158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:54 crc kubenswrapper[4782]: W1124 11:55:54.097109 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:54 crc kubenswrapper[4782]: E1124 11:55:54.097213 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.431064 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:54 crc kubenswrapper[4782]: E1124 11:55:54.443137 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="3.2s" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.538984 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b93f99f599eef0f7a4b11408c6ab4763272fe531bf44d971a95c08b6c6a3ed09"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.539114 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.540171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.540227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.540240 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.540914 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.540937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.540948 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.541014 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.541744 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.541773 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.541786 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:54 crc 
kubenswrapper[4782]: I1124 11:55:54.543543 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.543573 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.543586 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.543599 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.544860 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="adc3bc01d8a6705adb4efe213d6ab6086d0fd883642e04d093248852507134dc" exitCode=0 Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.544932 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.544941 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.544956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"adc3bc01d8a6705adb4efe213d6ab6086d0fd883642e04d093248852507134dc"} Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.545601 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.545621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.545629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.545673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.545690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.545700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:54 crc kubenswrapper[4782]: W1124 11:55:54.575653 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Nov 24 11:55:54 crc kubenswrapper[4782]: E1124 11:55:54.575740 4782 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.656816 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.657774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.657800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.657811 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:54 crc kubenswrapper[4782]: I1124 11:55:54.657837 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:55:54 crc kubenswrapper[4782]: E1124 11:55:54.658174 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.146857 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.549309 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f50096f49ee51f3401db17a2888175b0c3160311f52ddc6fe147a820a0900a46" exitCode=0 Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.549427 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f50096f49ee51f3401db17a2888175b0c3160311f52ddc6fe147a820a0900a46"} Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.549539 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.550591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.550619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.550628 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.558190 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.558236 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.558786 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559046 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc"} Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559127 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559890 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559938 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.559985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.560005 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.560921 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.560934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.560943 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.561010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.561060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:55 crc kubenswrapper[4782]: I1124 11:55:55.561081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.563641 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.563711 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5cffd0174c3cb04566ae7ef3632855eeb0b56bf622262b8334b4084ebbe2f533"} Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.563774 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dccbb475b51bcd69809baef6da0641600566900d54c8853cdc43de74edccd8f0"} Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.563797 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7c9f862218442ada3ad32c87e985a21e800e6e47b6ffa471ad0a8c1aad2056aa"} Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.563820 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"385904c870352642d2f1a474567e23681034463b6e225c1bc2b942cd70a815ab"} Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.563924 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.564537 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.564564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.564575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:56 crc kubenswrapper[4782]: I1124 11:55:56.681002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.058028 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.568411 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f1999bb471d3448585066e9aaceae5062b110648389e7f8bd5df5b2bc5a3db4c"} Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.568475 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.568476 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.569267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.569292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.569300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.569310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.569343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.569355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.858528 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.860531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:57 crc kubenswrapper[4782]: 
I1124 11:55:57.860603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.860622 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:57 crc kubenswrapper[4782]: I1124 11:55:57.860658 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.570495 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.570540 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.571550 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.571720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.571747 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.572517 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.572566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:58 crc kubenswrapper[4782]: I1124 11:55:58.572578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:55:59 crc kubenswrapper[4782]: I1124 11:55:59.498633 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:55:59 crc kubenswrapper[4782]: I1124 11:55:59.498797 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:55:59 crc kubenswrapper[4782]: I1124 11:55:59.500212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:55:59 crc kubenswrapper[4782]: I1124 11:55:59.500258 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:55:59 crc kubenswrapper[4782]: I1124 11:55:59.500271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.157794 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.157993 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.162871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.162908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.162921 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.401536 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.409553 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.575058 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.575872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.575979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:00 crc kubenswrapper[4782]: I1124 11:56:00.576042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:01 crc kubenswrapper[4782]: E1124 11:56:01.548884 4782 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.576584 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.577417 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.577456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.577464 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.785360 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.785729 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.786783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.786928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:01 crc kubenswrapper[4782]: I1124 11:56:01.786998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:02 crc kubenswrapper[4782]: I1124 11:56:02.534507 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 11:56:02 crc kubenswrapper[4782]: I1124 11:56:02.534681 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:02 crc kubenswrapper[4782]: I1124 11:56:02.535597 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:02 crc kubenswrapper[4782]: I1124 11:56:02.535622 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 
11:56:02 crc kubenswrapper[4782]: I1124 11:56:02.535631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:03 crc kubenswrapper[4782]: I1124 11:56:03.158006 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:56:03 crc kubenswrapper[4782]: I1124 11:56:03.158805 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:56:05 crc kubenswrapper[4782]: W1124 11:56:05.305599 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:56:05 crc kubenswrapper[4782]: I1124 11:56:05.305771 4782 trace.go:236] Trace[1142085398]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:55:55.304) (total time: 10001ms): Nov 24 11:56:05 crc kubenswrapper[4782]: Trace[1142085398]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:56:05.305) Nov 24 11:56:05 crc kubenswrapper[4782]: Trace[1142085398]: [10.001224115s] [10.001224115s] END Nov 24 11:56:05 crc kubenswrapper[4782]: E1124 11:56:05.305815 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:56:05 crc kubenswrapper[4782]: I1124 11:56:05.431969 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:56:05 crc kubenswrapper[4782]: W1124 11:56:05.545575 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:56:05 crc kubenswrapper[4782]: I1124 11:56:05.545704 4782 trace.go:236] Trace[950074236]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:55:55.544) (total time: 10001ms): Nov 24 11:56:05 crc kubenswrapper[4782]: Trace[950074236]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:56:05.545) Nov 24 11:56:05 crc kubenswrapper[4782]: Trace[950074236]: [10.001464112s] [10.001464112s] END Nov 24 11:56:05 crc kubenswrapper[4782]: E1124 11:56:05.546123 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.281893 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.281971 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.289074 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.289351 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.690591 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]log ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]etcd ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/priority-and-fairness-filter ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-apiextensions-informers ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-apiextensions-controllers ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/crd-informer-synced ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-system-namespaces-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 24 11:56:06 crc kubenswrapper[4782]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 24 11:56:06 crc kubenswrapper[4782]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/bootstrap-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/start-kube-aggregator-informers ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/apiservice-registration-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/apiservice-discovery-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]autoregister-completion ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/apiservice-openapi-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 24 11:56:06 crc kubenswrapper[4782]: livez check failed Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.692540 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.932978 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.933219 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.934742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.934805 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.934822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:06 crc kubenswrapper[4782]: I1124 11:56:06.997226 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 11:56:07 crc kubenswrapper[4782]: I1124 11:56:07.545910 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 11:56:07 crc kubenswrapper[4782]: I1124 11:56:07.589350 4782 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:07 crc kubenswrapper[4782]: I1124 11:56:07.590028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:07 crc kubenswrapper[4782]: I1124 11:56:07.590061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:07 crc kubenswrapper[4782]: I1124 11:56:07.590073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:08 crc kubenswrapper[4782]: I1124 11:56:08.592724 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:08 crc kubenswrapper[4782]: I1124 11:56:08.594187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:08 crc kubenswrapper[4782]: I1124 11:56:08.594249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:08 crc kubenswrapper[4782]: I1124 11:56:08.594274 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:09 crc kubenswrapper[4782]: I1124 11:56:09.503591 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:56:09 crc kubenswrapper[4782]: I1124 11:56:09.503836 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:09 crc kubenswrapper[4782]: I1124 11:56:09.505511 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:09 crc kubenswrapper[4782]: I1124 11:56:09.505568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:09 crc kubenswrapper[4782]: I1124 11:56:09.505586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:09 crc kubenswrapper[4782]: I1124 11:56:09.687860 4782 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.052343 4782 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.285623 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.289783 4782 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.293432 4782 trace.go:236] Trace[1017178071]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:55:59.974) (total time: 11318ms): Nov 24 11:56:11 crc kubenswrapper[4782]: Trace[1017178071]: ---"Objects listed" error: 11318ms (11:56:11.293) Nov 24 11:56:11 crc kubenswrapper[4782]: Trace[1017178071]: [11.318334062s] [11.318334062s] END Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.293479 4782 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 
11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.296027 4782 trace.go:236] Trace[500171490]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:55:59.498) (total time: 11797ms): Nov 24 11:56:11 crc kubenswrapper[4782]: Trace[500171490]: ---"Objects listed" error: 11797ms (11:56:11.295) Nov 24 11:56:11 crc kubenswrapper[4782]: Trace[500171490]: [11.797356172s] [11.797356172s] END Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.296071 4782 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.297059 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.505393 4782 apiserver.go:52] "Watching apiserver" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.506991 4782 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.507237 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.507691 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.507740 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.507714 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.507850 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.507900 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.507938 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.508072 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.508135 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.508421 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.509610 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.510006 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.511160 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.511540 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.511634 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.511643 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.511721 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.511935 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.511945 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.539035 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.539514 4782 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.564990 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.576632 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591365 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591445 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591476 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591503 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591527 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591550 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591575 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591599 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591624 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591649 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591693 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591714 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591737 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591761 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591777 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591794 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591817 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591836 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591858 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591876 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591894 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591933 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591955 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591988 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592007 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592026 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592046 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592067 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592090 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592112 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592133 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592155 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592183 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592205 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592230 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592251 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592272 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592293 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592314 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592338 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592360 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592441 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592461 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592485 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592508 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592533 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592558 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592582 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592603 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592627 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592648 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592669 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592690 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592711 4782 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592731 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592756 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592777 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592799 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592818 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592838 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592859 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592878 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592896 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592917 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592957 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592977 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592995 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593016 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593034 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593056 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593076 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593102 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593121 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593164 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593185 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593206 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593225 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593246 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593264 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593283 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593303 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:56:11 crc 
kubenswrapper[4782]: I1124 11:56:11.593323 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593342 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593361 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593399 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593421 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593459 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593478 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593496 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593515 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " 
Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593552 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593572 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593591 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593613 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593633 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593687 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593709 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593730 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593751 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" 
(UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593773 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593793 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593812 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593832 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593853 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593875 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593926 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593949 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593994 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594016 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594037 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594057 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594100 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594124 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594147 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594191 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594212 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594233 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594275 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594297 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594319 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594340 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594380 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594402 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594422 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594442 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594461 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594481 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594501 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594522 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594544 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594570 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594594 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594636 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594657 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594676 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:56:11 crc 
kubenswrapper[4782]: I1124 11:56:11.594698 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594739 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594780 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594800 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594821 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594862 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594885 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594906 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594931 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594974 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594996 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595018 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595037 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595060 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595080 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595102 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595122 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595163 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595183 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595205 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595229 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595250 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595272 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.591933 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595294 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592056 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592180 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592310 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592538 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.592561 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593089 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593125 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593574 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593940 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.593956 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594312 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594570 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594743 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.594916 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595079 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595258 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595181 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595659 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595911 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595942 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596230 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596295 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596356 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596483 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596588 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596684 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596879 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.596926 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597159 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597309 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597492 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597559 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597631 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597656 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.597992 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598112 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598112 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598235 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598401 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598416 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598616 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598776 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.598939 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599250 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599411 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599621 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599418 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599888 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599922 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600078 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600169 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.599732 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600238 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600503 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600528 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600680 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600854 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600863 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.600842 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.601142 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.601194 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.601251 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.601329 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.604959 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.605186 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.606251 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.606398 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.606524 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.606618 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.606639 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.606896 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.607068 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.607550 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.607568 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.607982 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.607983 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.608267 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.608516 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.608722 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.609003 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.609275 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.609544 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.609729 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.610016 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.610141 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.610396 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.610491 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.610558 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.610851 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611307 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611540 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611619 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611661 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611787 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.611939 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612016 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612201 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612304 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612478 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612654 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.612952 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.613045 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.614682 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.615131 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.615208 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.615509 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.615779 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.610757 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.615986 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.616185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.616241 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.616298 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.616982 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617072 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617164 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617022 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617210 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617118 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617484 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617508 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.595317 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617728 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.617959 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618256 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618354 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618406 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618463 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618486 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618511 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618553 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618677 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618636 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618857 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.618993 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619010 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619247 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619404 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc" exitCode=255 Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619422 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619433 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619438 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc"} Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619467 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619493 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619518 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619542 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619565 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619589 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619612 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619636 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619660 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619682 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619708 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619731 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619753 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619805 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619841 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619908 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619936 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.619964 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620013 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620034 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620060 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620111 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620134 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620157 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620279 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620295 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620309 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620322 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620333 4782 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" 
Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620348 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620360 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620392 4782 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620404 4782 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620415 4782 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620426 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620438 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620450 4782 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620462 4782 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620477 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620489 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620502 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620514 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 
11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620526 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620538 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620552 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620564 4782 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620575 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620589 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620601 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620613 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620629 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620642 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620654 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620667 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620680 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620692 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620703 4782 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620716 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620729 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620742 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620753 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620765 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620776 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620787 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620800 4782 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620812 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620823 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620835 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620847 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620858 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620873 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620885 4782 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620898 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620911 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620923 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620934 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620946 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620958 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620973 4782 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620987 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.620999 4782 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621011 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621023 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621037 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621049 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621061 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621075 4782 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621087 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621099 4782 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621113 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621125 4782 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621137 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621150 4782 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621162 4782 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621173 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621185 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621198 4782 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621209 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621221 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621233 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621245 4782 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621258 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621269 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621283 4782 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621295 4782 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621306 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621318 4782 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621331 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621343 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621355 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621421 4782 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621437 4782 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621449 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621462 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621472 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621487 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621500 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621511 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621523 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621536 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621548 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621560 4782 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621574 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621588 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621603 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621638 4782 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621653 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621664 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621675 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621690 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621701 4782 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621713 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621724 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621736 4782 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621747 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621759 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621771 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621783 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621796 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621808 4782 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621823 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621835 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621847 4782 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621888 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621900 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621912 4782 reconciler_common.go:293] "Volume detached for 
volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621925 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621937 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621950 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621962 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621974 4782 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.621987 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.622003 4782 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.622084 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.622146 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:12.122124738 +0000 UTC m=+21.365958507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.622853 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.622768 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.623335 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.623764 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.624067 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.624663 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.624788 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:56:12.12477139 +0000 UTC m=+21.368605279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.624875 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.625131 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.625539 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.625912 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.626267 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.626423 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.626546 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.626828 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.627194 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.627353 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.627632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.627975 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.628038 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:12.128020808 +0000 UTC m=+21.371854677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.629181 4782 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.630632 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.630939 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.636625 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.639601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.639914 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.640987 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.641012 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.641023 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.641072 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:12.141056852 +0000 UTC m=+21.384890621 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.641466 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.642420 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.642469 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.642780 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.642927 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.642943 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.643586 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.643685 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.643901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.643957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.644309 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.646334 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.647012 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.647541 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.652324 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.652337 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.652671 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.652816 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.652878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.652983 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.653276 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.653412 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.653501 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.655642 4782 scope.go:117] "RemoveContainer" containerID="fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.656224 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.656388 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.656544 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.656947 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.657418 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.657747 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.657772 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.657785 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:11 crc kubenswrapper[4782]: E1124 11:56:11.657838 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:12.157819338 +0000 UTC m=+21.401653207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.657965 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.658038 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.658160 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.658707 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.658742 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.659446 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.659532 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.659813 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.660207 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.660339 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.659628 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.663703 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.663911 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.668948 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.669119 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.673774 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.676033 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.682856 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.687571 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.696137 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.703562 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.714673 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723578 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723647 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723735 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723756 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723779 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723811 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723841 4782 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723850 4782 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723859 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723869 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723878 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723947 4782 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723958 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723966 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723890 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-2
4T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.723975 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724099 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724114 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724126 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724137 
4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724148 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724160 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724173 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724184 4782 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724195 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724206 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724218 4782 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724231 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724243 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724254 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724266 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724277 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" 
Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724287 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724298 4782 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724311 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724323 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724334 4782 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724346 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724357 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724385 4782 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724397 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724408 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724418 4782 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724429 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724440 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" 
DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724451 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724460 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724470 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724480 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724490 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724501 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724511 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724521 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724531 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724541 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724550 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724560 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724569 4782 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724580 4782 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724591 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724601 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724611 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724620 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724631 4782 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724641 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724652 4782 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724662 4782 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724673 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724684 4782 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.724695 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.731801 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.739454 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.747116 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.755209 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.768791 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24
T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.779012 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.815494 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.817560 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.821784 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.825209 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.825637 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.829543 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.836175 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: W1124 11:56:11.839160 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-bed9eb1f100aa8eb64b36230a28858c224c9d64e774bf9243a45e28da4bb45ec WatchSource:0}: Error finding container bed9eb1f100aa8eb64b36230a28858c224c9d64e774bf9243a45e28da4bb45ec: Status 404 returned error can't find the container with id bed9eb1f100aa8eb64b36230a28858c224c9d64e774bf9243a45e28da4bb45ec Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.839936 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.845100 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.851053 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.855874 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.866583 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.879324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.893558 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.907758 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.938273 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24
T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.951717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.961051 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:11 crc kubenswrapper[4782]: I1124 11:56:11.970927 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.061808 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.071961 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.082184 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.094114 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.106092 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.118502 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24
T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.127688 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.127773 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.127820 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 11:56:13.127793771 +0000 UTC m=+22.371627540 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.127846 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.127888 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:13.127875513 +0000 UTC m=+22.371709282 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.129313 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.140072 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.149951 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.228341 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.228423 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.228485 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228560 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228590 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228608 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228649 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228655 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:13.228636542 +0000 UTC m=+22.472470321 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228662 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228597 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228699 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:13.228686473 +0000 UTC m=+22.472520262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228708 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:12 crc kubenswrapper[4782]: E1124 11:56:12.228767 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:13.228759315 +0000 UTC m=+22.472593104 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.373984 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.622827 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c"} Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.622894 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6"} Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.622909 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d2d7f3c5d61f1e0ff7184ba3aa0f66391e37e21c6f714e9b3d84bfd8771bf057"} Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.624084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bed9eb1f100aa8eb64b36230a28858c224c9d64e774bf9243a45e28da4bb45ec"} Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.625355 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35"} Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.625413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c667a78cafd35c61c4cda087dd3935ca5bae5b90f7542cd729e3661cf324248e"} Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.627666 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.651584 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb"} Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.651749 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.665299 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.674066 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.691191 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24
T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.717609 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.735092 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.749216 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.758413 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.771671 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.783956 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.810346 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.828477 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.845103 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.866314 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.890299 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.909440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 
11:56:12.930680 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fp44f"] Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.930959 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.931545 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xrshv"] Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.931720 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.935022 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.935760 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.935914 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.936077 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.940796 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.940886 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.940927 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.941053 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.942606 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-system-cni-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954780 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-cni-bin\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-netns\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954815 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-conf-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954829 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-multus-certs\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954847 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec2d328d-ab01-42e7-9583-865d1b5516d8-hosts-file\") pod \"node-resolver-xrshv\" (UID: \"ec2d328d-ab01-42e7-9583-865d1b5516d8\") " pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954877 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-kubelet\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954891 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/56de1ffb-9734-4992-b477-591dfae5ad41-multus-daemon-config\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954906 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-socket-dir-parent\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-cni-multus\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-etc-kubernetes\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954948 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-cni-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954968 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zvdnm\" (UniqueName: \"kubernetes.io/projected/56de1ffb-9734-4992-b477-591dfae5ad41-kube-api-access-zvdnm\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.954988 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-cnibin\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.955004 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-k8s-cni-cncf-io\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.955017 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-os-release\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.955032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/56de1ffb-9734-4992-b477-591dfae5ad41-cni-binary-copy\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.955045 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-hostroot\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.955062 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmb6x\" (UniqueName: \"kubernetes.io/projected/ec2d328d-ab01-42e7-9583-865d1b5516d8-kube-api-access-lmb6x\") pod \"node-resolver-xrshv\" (UID: \"ec2d328d-ab01-42e7-9583-865d1b5516d8\") " pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.962602 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:12 crc kubenswrapper[4782]: I1124 11:56:12.987803 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.003128 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.011610 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.023309 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.049124 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055234 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-cnibin\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055266 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-k8s-cni-cncf-io\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-hostroot\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055306 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmb6x\" (UniqueName: \"kubernetes.io/projected/ec2d328d-ab01-42e7-9583-865d1b5516d8-kube-api-access-lmb6x\") pod \"node-resolver-xrshv\" (UID: \"ec2d328d-ab01-42e7-9583-865d1b5516d8\") " 
pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-os-release\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/56de1ffb-9734-4992-b477-591dfae5ad41-cni-binary-copy\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055353 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-system-cni-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055366 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-cni-bin\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055397 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-multus-certs\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec2d328d-ab01-42e7-9583-865d1b5516d8-hosts-file\") pod \"node-resolver-xrshv\" (UID: \"ec2d328d-ab01-42e7-9583-865d1b5516d8\") " pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055423 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-netns\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-conf-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055464 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-kubelet\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055477 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/56de1ffb-9734-4992-b477-591dfae5ad41-multus-daemon-config\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055491 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-etc-kubernetes\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055506 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-socket-dir-parent\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055522 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-cni-multus\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055542 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-cni-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055558 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvdnm\" (UniqueName: \"kubernetes.io/projected/56de1ffb-9734-4992-b477-591dfae5ad41-kube-api-access-zvdnm\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055765 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-cnibin\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055795 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-k8s-cni-cncf-io\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.055815 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-hostroot\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-os-release\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056187 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-conf-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056668 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-system-cni-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056737 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-cni-bin\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056763 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-multus-certs\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056795 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec2d328d-ab01-42e7-9583-865d1b5516d8-hosts-file\") pod \"node-resolver-xrshv\" (UID: \"ec2d328d-ab01-42e7-9583-865d1b5516d8\") " pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056824 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-run-netns\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056848 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-etc-kubernetes\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.056867 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-kubelet\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.057026 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/56de1ffb-9734-4992-b477-591dfae5ad41-cni-binary-copy\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.057079 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-host-var-lib-cni-multus\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" 
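The status-patch failures in the records above change character between 11:56:12 and 11:56:13: first the kubelet cannot reach the pod.network-node-identity.openshift.io webhook at all ("dial tcp 127.0.0.1:9743: connect: connection refused"), then, once something answers on that port, the handshake fails because the serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2025-11-24. A minimal Go sketch of how one might confirm that validity window directly (assumptions, not taken from the log: it is run on the node itself, and the webhook serves TLS on 127.0.0.1:9743 exactly as the kubelet errors state):

    // Inspect the serving certificate of the webhook endpoint named in the
    // kubelet errors above. Illustrative diagnostic only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // InsecureSkipVerify lets the handshake complete even though the
        // chain is invalid; the point is to read the certificate, not to
        // trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743",
            &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            // While the webhook container is still coming up, this fails the
            // same way the kubelet does: connect: connection refused.
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore)
        // Per the x509 errors above, this should print a NotAfter of
        // 2025-08-24 17:21:41 +0000 UTC, months behind the node's clock.
        fmt.Println("notAfter: ", cert.NotAfter)
    }

The same check can be made without Go via openssl, e.g. openssl s_client -connect 127.0.0.1:9743 piped through openssl x509 -noout -dates.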
Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.057115 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-socket-dir-parent\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.057151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56de1ffb-9734-4992-b477-591dfae5ad41-multus-cni-dir\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.057413 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/56de1ffb-9734-4992-b477-591dfae5ad41-multus-daemon-config\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.072391 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.073632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvdnm\" (UniqueName: \"kubernetes.io/projected/56de1ffb-9734-4992-b477-591dfae5ad41-kube-api-access-zvdnm\") pod \"multus-fp44f\" (UID: \"56de1ffb-9734-4992-b477-591dfae5ad41\") " pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.073820 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmb6x\" (UniqueName: \"kubernetes.io/projected/ec2d328d-ab01-42e7-9583-865d1b5516d8-kube-api-access-lmb6x\") pod \"node-resolver-xrshv\" (UID: \"ec2d328d-ab01-42e7-9583-865d1b5516d8\") " pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.085799 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.098492 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.110854 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.156255 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.156325 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.156427 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.156475 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:15.156463167 +0000 UTC m=+24.400296936 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.156784 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:56:15.156775686 +0000 UTC m=+24.400609445 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.243021 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fp44f" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.249028 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xrshv" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.257071 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.257240 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257199 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257301 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257314 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257355 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:15.257341659 +0000 UTC m=+24.501175428 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257656 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257718 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:15.257710709 +0000 UTC m=+24.501544478 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.257272 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257819 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257831 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257838 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.257870 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:15.257864213 +0000 UTC m=+24.501697982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:13 crc kubenswrapper[4782]: W1124 11:56:13.262302 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56de1ffb_9734_4992_b477_591dfae5ad41.slice/crio-ebfa0e75fe6dd6125d2fbde3f7a1dd1f2e0d465eac78ae236eb333c866cb770d WatchSource:0}: Error finding container ebfa0e75fe6dd6125d2fbde3f7a1dd1f2e0d465eac78ae236eb333c866cb770d: Status 404 returned error can't find the container with id ebfa0e75fe6dd6125d2fbde3f7a1dd1f2e0d465eac78ae236eb333c866cb770d Nov 24 11:56:13 crc kubenswrapper[4782]: W1124 11:56:13.263998 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec2d328d_ab01_42e7_9583_865d1b5516d8.slice/crio-37f57148f477d21ef9cbbb8af815af281033cd3effc6930c2042f6551a4ded27 WatchSource:0}: Error finding container 37f57148f477d21ef9cbbb8af815af281033cd3effc6930c2042f6551a4ded27: Status 404 returned error can't find the container with id 37f57148f477d21ef9cbbb8af815af281033cd3effc6930c2042f6551a4ded27 Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.333229 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zzzxx"] Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.334176 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.337163 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xg6cl"] Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.337456 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.341275 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.341644 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.342863 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-76dcq"] Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.341865 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.350896 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.351111 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.351197 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.351541 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.355796 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.356554 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.356576 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.357728 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.357546 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358104 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/078c4346-9841-4870-a8b8-de6911b24498-proxy-tls\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-slash\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358147 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-node-log\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358185 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0f32dbad-0f9c-401b-89f2-a5069455e025-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358202 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4ld\" (UniqueName: 
\"kubernetes.io/projected/0f32dbad-0f9c-401b-89f2-a5069455e025-kube-api-access-mk4ld\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358217 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-os-release\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358230 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0f32dbad-0f9c-401b-89f2-a5069455e025-cni-binary-copy\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358244 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-bin\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358293 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-system-cni-dir\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-systemd-units\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358340 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-config\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-netd\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 
11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358390 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-ovn\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-netns\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-systemd\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358431 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-log-socket\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358445 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm8qn\" (UniqueName: \"kubernetes.io/projected/078c4346-9841-4870-a8b8-de6911b24498-kube-api-access-fm8qn\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1de863b0-02f8-435c-9669-4ea856b352d8-ovn-node-metrics-cert\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358498 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km4xp\" (UniqueName: \"kubernetes.io/projected/1de863b0-02f8-435c-9669-4ea856b352d8-kube-api-access-km4xp\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358517 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/078c4346-9841-4870-a8b8-de6911b24498-rootfs\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358530 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/078c4346-9841-4870-a8b8-de6911b24498-mcd-auth-proxy-config\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358543 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-env-overrides\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358556 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-script-lib\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358569 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-tuning-conf-dir\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358582 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-etc-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358597 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-cnibin\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358610 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-kubelet\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358624 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-var-lib-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 
11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.358835 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.366845 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.367036 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.398919 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.442549 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.459234 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-os-release\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.459673 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0f32dbad-0f9c-401b-89f2-a5069455e025-cni-binary-copy\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.459739 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.459810 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-bin\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.459874 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-system-cni-dir\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.459949 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-systemd-units\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460032 
4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-config\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460107 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-netd\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460198 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-netns\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460285 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-systemd\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460411 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-ovn\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-system-cni-dir\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460533 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-netd\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460582 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460519 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-log-socket\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-bin\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460622 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460642 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-systemd-units\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460650 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm8qn\" (UniqueName: \"kubernetes.io/projected/078c4346-9841-4870-a8b8-de6911b24498-kube-api-access-fm8qn\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460668 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-systemd\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460677 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1de863b0-02f8-435c-9669-4ea856b352d8-ovn-node-metrics-cert\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-netns\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460717 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km4xp\" (UniqueName: \"kubernetes.io/projected/1de863b0-02f8-435c-9669-4ea856b352d8-kube-api-access-km4xp\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460733 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/078c4346-9841-4870-a8b8-de6911b24498-rootfs\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/078c4346-9841-4870-a8b8-de6911b24498-mcd-auth-proxy-config\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460770 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-tuning-conf-dir\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460785 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-etc-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460800 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-env-overrides\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460816 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-script-lib\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460828 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-os-release\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460841 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-cnibin\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460857 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-ovn\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460866 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-kubelet\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-var-lib-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/078c4346-9841-4870-a8b8-de6911b24498-proxy-tls\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-slash\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460937 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460951 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-node-log\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460965 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0f32dbad-0f9c-401b-89f2-a5069455e025-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460984 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk4ld\" (UniqueName: \"kubernetes.io/projected/0f32dbad-0f9c-401b-89f2-a5069455e025-kube-api-access-mk4ld\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461071 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0f32dbad-0f9c-401b-89f2-a5069455e025-cni-binary-copy\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461112 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-kubelet\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461256 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-var-lib-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.460884 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/078c4346-9841-4870-a8b8-de6911b24498-rootfs\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461400 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-config\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461438 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461574 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/078c4346-9841-4870-a8b8-de6911b24498-mcd-auth-proxy-config\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-tuning-conf-dir\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461709 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-etc-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.461978 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-openvswitch\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.462014 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-env-overrides\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.462016 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-slash\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.462037 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-node-log\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.462062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0f32dbad-0f9c-401b-89f2-a5069455e025-cnibin\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.462246 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-script-lib\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.462503 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-log-socket\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.462550 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0f32dbad-0f9c-401b-89f2-a5069455e025-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.468527 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1de863b0-02f8-435c-9669-4ea856b352d8-ovn-node-metrics-cert\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.469476 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/078c4346-9841-4870-a8b8-de6911b24498-proxy-tls\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.469766 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.487815 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk4ld\" (UniqueName: \"kubernetes.io/projected/0f32dbad-0f9c-401b-89f2-a5069455e025-kube-api-access-mk4ld\") pod \"multus-additional-cni-plugins-76dcq\" (UID: \"0f32dbad-0f9c-401b-89f2-a5069455e025\") " pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.491224 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.491343 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.491450 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.491611 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.491227 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:13 crc kubenswrapper[4782]: E1124 11:56:13.491792 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.492361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm8qn\" (UniqueName: \"kubernetes.io/projected/078c4346-9841-4870-a8b8-de6911b24498-kube-api-access-fm8qn\") pod \"machine-config-daemon-xg6cl\" (UID: \"078c4346-9841-4870-a8b8-de6911b24498\") " pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.499914 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km4xp\" (UniqueName: \"kubernetes.io/projected/1de863b0-02f8-435c-9669-4ea856b352d8-kube-api-access-km4xp\") pod \"ovnkube-node-zzzxx\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.500551 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.501086 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.502245 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.502932 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.503900 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.504439 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.505015 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.507749 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.508367 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.509441 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.509988 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.512446 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.513051 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.515849 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.516401 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.517827 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.518659 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.519766 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.520175 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.520766 4782 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.521855 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.522453 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.523168 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.524864 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.525708 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.526639 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.527364 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.528826 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.529867 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.530629 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.531663 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.532222 4782 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.532530 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.535103 4782 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.535810 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.536447 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.536595 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.538285 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 11:56:13 
crc kubenswrapper[4782]: I1124 11:56:13.540743 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.541384 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.542582 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.543347 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.544273 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.544901 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.545911 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.547166 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.547644 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.548603 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.549109 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.551242 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.551777 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.552265 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.553425 4782 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.553912 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.555195 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.556146 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.561294 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.573318 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.586651 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.602286 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.613692 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.627136 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.633489 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xrshv" event={"ID":"ec2d328d-ab01-42e7-9583-865d1b5516d8","Type":"ContainerStarted","Data":"983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a"} Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.633530 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xrshv" event={"ID":"ec2d328d-ab01-42e7-9583-865d1b5516d8","Type":"ContainerStarted","Data":"37f57148f477d21ef9cbbb8af815af281033cd3effc6930c2042f6551a4ded27"} Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.634742 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerStarted","Data":"06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f"} Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.634799 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerStarted","Data":"ebfa0e75fe6dd6125d2fbde3f7a1dd1f2e0d465eac78ae236eb333c866cb770d"} Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.643687 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.657486 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.666042 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.666928 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.680817 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-76dcq" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.685461 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.709703 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.731220 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.750406 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.770663 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.794827 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.807527 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.819697 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.846044 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.880741 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.920168 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.955172 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:13 crc kubenswrapper[4782]: I1124 11:56:13.992865 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.045249 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.082163 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.144873 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.163675 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.204301 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.241987 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.283624 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.315433 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.357414 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.399060 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.437326 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.638632 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.640327 4782 generic.go:334] "Generic (PLEG): container finished" podID="0f32dbad-0f9c-401b-89f2-a5069455e025" containerID="70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707" exitCode=0 Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.640401 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" 
event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerDied","Data":"70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.640476 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerStarted","Data":"6d789c231ee4f67732d84e9a86b9c5505a2e4cafefdced683461e33a3ed0a48a"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.643201 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.643247 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.643285 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"3056e460085386e732a71be0fbd0e44dad4e384c03b61e1cfe3b7a79be25fcb5"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.646193 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f" exitCode=0 Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.646558 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.646611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"05451d65c5f48ea004654d71a092f74208c9c1b465208c341ca64406bd5a6dc8"} Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.680510 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.698243 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.710705 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.728254 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.750507 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.763485 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.778155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.789729 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.809776 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.837270 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.890588 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:14 crc kubenswrapper[4782]: I1124 11:56:14.917479 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.001924 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-24T11:56:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.036663 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.090894 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.114608 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.136468 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.157216 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.176355 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.176485 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:15 
crc kubenswrapper[4782]: E1124 11:56:15.176564 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.176566 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:56:19.176541969 +0000 UTC m=+28.420375738 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.176627 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:19.176620451 +0000 UTC m=+28.420454220 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.195294 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.234861 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.274124 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.277467 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.277519 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.277545 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.277642 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.277687 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:19.277674248 +0000 UTC m=+28.521508017 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.277969 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.277988 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.277996 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.278022 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:19.278014577 +0000 UTC m=+28.521848346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.278059 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.278067 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.278073 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.278089 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:19.278084339 +0000 UTC m=+28.521918108 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.314865 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.357669 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.408808 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.435492 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.484099 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc 
kubenswrapper[4782]: I1124 11:56:15.490300 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.490417 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.490473 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.490416 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.490305 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:15 crc kubenswrapper[4782]: E1124 11:56:15.490529 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.655120 4782 generic.go:334] "Generic (PLEG): container finished" podID="0f32dbad-0f9c-401b-89f2-a5069455e025" containerID="5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec" exitCode=0 Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.655492 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerDied","Data":"5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec"} Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.675586 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd"} Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.675711 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027"} Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.675725 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652"} Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.675735 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b"} Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.675745 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a"} Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.678715 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.686480 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xfk7b"] Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.694004 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.697072 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.697135 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.698153 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.698581 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.701947 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.719804 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.739279 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.767667 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.782119 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgq2s\" (UniqueName: \"kubernetes.io/projected/8e40dc18-6890-40a2-be2c-f40d806dc39b-kube-api-access-lgq2s\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.782179 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e40dc18-6890-40a2-be2c-f40d806dc39b-host\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.782235 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e40dc18-6890-40a2-be2c-f40d806dc39b-serviceca\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.798665 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.834290 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.881538 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.882729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e40dc18-6890-40a2-be2c-f40d806dc39b-serviceca\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.882795 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgq2s\" (UniqueName: \"kubernetes.io/projected/8e40dc18-6890-40a2-be2c-f40d806dc39b-kube-api-access-lgq2s\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.882828 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e40dc18-6890-40a2-be2c-f40d806dc39b-host\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.882908 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e40dc18-6890-40a2-be2c-f40d806dc39b-host\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.883998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e40dc18-6890-40a2-be2c-f40d806dc39b-serviceca\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.933104 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgq2s\" (UniqueName: \"kubernetes.io/projected/8e40dc18-6890-40a2-be2c-f40d806dc39b-kube-api-access-lgq2s\") pod \"node-ca-xfk7b\" (UID: \"8e40dc18-6890-40a2-be2c-f40d806dc39b\") " pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.940497 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:15 crc kubenswrapper[4782]: I1124 11:56:15.978486 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:15Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.013582 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xfk7b" Nov 24 11:56:16 crc kubenswrapper[4782]: W1124 11:56:16.024556 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e40dc18_6890_40a2_be2c_f40d806dc39b.slice/crio-2f34521e79b64481de3d24b134440b292cfb42fd12a344215b8b5322ceda366d WatchSource:0}: Error finding container 2f34521e79b64481de3d24b134440b292cfb42fd12a344215b8b5322ceda366d: Status 404 returned error can't find the container with id 2f34521e79b64481de3d24b134440b292cfb42fd12a344215b8b5322ceda366d Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.027865 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.064423 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.098164 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.136810 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.177741 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.217188 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.257868 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.295216 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.333180 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.376601 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.416271 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.453564 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.498805 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.533860 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.578356 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.615469 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.660746 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnku
be-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1
74f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.679162 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xfk7b" event={"ID":"8e40dc18-6890-40a2-be2c-f40d806dc39b","Type":"ContainerStarted","Data":"1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6"} Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.679216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xfk7b" event={"ID":"8e40dc18-6890-40a2-be2c-f40d806dc39b","Type":"ContainerStarted","Data":"2f34521e79b64481de3d24b134440b292cfb42fd12a344215b8b5322ceda366d"} Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.682867 4782 generic.go:334] "Generic (PLEG): container finished" podID="0f32dbad-0f9c-401b-89f2-a5069455e025" containerID="9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738" exitCode=0 Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.682917 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerDied","Data":"9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738"} Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.687302 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5"} Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.703838 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.736324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.779272 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.822279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.857931 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.895606 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.934187 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:16 crc kubenswrapper[4782]: I1124 11:56:16.979937 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnku
be-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1
74f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.015529 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.054504 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.094806 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.133867 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.186516 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.217830 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.256772 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.294626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.333938 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.374677 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.415823 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.462547 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.497546 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.497695 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.497802 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.497912 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.498387 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.498455 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.511639 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.535623 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.574010 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.624615 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.656158 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.692156 4782 generic.go:334] "Generic (PLEG): container finished" podID="0f32dbad-0f9c-401b-89f2-a5069455e025" containerID="94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54" exitCode=0 Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.692198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerDied","Data":"94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54"} Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.696139 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.697736 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.699305 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.699351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.699365 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.699441 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.755812 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.768458 4782 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.768699 4782 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.774300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.774338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.774345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.774357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.774366 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:17Z","lastTransitionTime":"2025-11-24T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.785634 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.789129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.789157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.789166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.789178 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.789186 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:17Z","lastTransitionTime":"2025-11-24T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.809410 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.813861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.813900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.813912 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.813929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.813939 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:17Z","lastTransitionTime":"2025-11-24T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.818967 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.829814 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.832981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.833012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.833022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.833034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.833044 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:17Z","lastTransitionTime":"2025-11-24T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.845003 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.852790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.852844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.852855 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.852871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.852882 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:17Z","lastTransitionTime":"2025-11-24T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.858708 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.870176 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: E1124 11:56:17.870282 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.872584 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.872628 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.872660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.872676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.872687 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:17Z","lastTransitionTime":"2025-11-24T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.895801 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.940336 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.974615 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.974651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.974661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.974675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.974685 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:17Z","lastTransitionTime":"2025-11-24T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:17 crc kubenswrapper[4782]: I1124 11:56:17.975279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:17Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.014437 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.057080 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.076552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.076598 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.076611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.076628 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.076958 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.093738 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.135213 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.176523 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.179000 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.179032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.179042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.179055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.179065 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.214804 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.254354 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.280885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.280929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.280940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.280955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.280967 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.299129 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.338067 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.378217 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.389392 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.389446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.389460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.389477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.389498 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.491644 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.491681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.491690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.491703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.491712 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.594000 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.594041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.594053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.594069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.594081 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.695518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.695552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.695561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.695573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.695581 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.698604 4782 generic.go:334] "Generic (PLEG): container finished" podID="0f32dbad-0f9c-401b-89f2-a5069455e025" containerID="9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f" exitCode=0 Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.698676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerDied","Data":"9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.704028 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.720825 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.733757 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.773583 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.800227 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.801927 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.801960 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.801972 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.801989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.802002 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.812962 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.832349 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.856755 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.870841 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d8
5773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.881717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.891812 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.900680 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.903899 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.903930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.903942 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.903957 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.903968 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:18Z","lastTransitionTime":"2025-11-24T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.917720 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.929902 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:18 crc kubenswrapper[4782]: I1124 11:56:18.943771 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.006002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.006239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.006317 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.006411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.006485 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.109147 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.109344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.109704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.109883 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.110036 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.212824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.213285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.213393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.213466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.213523 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.215143 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.215294 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.215421 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.215527 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:27.21551258 +0000 UTC m=+36.459346349 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.215708 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:56:27.215667544 +0000 UTC m=+36.459501343 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.315560 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.315606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.315617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.315634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.315646 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.316160 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.316299 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.316437 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.316537 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.316751 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:56:27.31673315 +0000 UTC m=+36.560566929 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.316600 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.316933 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.316635 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.317041 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.317056 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.317116 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:27.31709963 +0000 UTC m=+36.560933429 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.317021 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.317412 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:27.317398489 +0000 UTC m=+36.561232268 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.418275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.418323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.418334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.418352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.418364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.490147 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.490212 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.490456 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.490263 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.490774 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:19 crc kubenswrapper[4782]: E1124 11:56:19.490582 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.520887 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.520926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.520935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.520949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.520959 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.622603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.622658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.622671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.622690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.622700 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.715901 4782 generic.go:334] "Generic (PLEG): container finished" podID="0f32dbad-0f9c-401b-89f2-a5069455e025" containerID="ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f" exitCode=0 Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.715937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerDied","Data":"ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.725486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.725518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.725528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.725544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.725553 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.731866 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.748958 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.767931 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.784494 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.799237 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.809632 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.821739 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.826874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.826895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.826903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.826916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.826924 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.832590 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.847123 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.857231 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.875841 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.888356 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.900329 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.912434 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.928226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.928254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.928262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.928277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:19 crc kubenswrapper[4782]: I1124 11:56:19.928285 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:19Z","lastTransitionTime":"2025-11-24T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.030093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.030129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.030172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.030187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.030197 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.133463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.133496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.133508 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.133522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.133532 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.235247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.235279 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.235291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.235308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.235318 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.337941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.337978 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.337989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.338003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.338014 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.440824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.440883 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.440895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.440913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.440924 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.542899 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.542929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.542939 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.542954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.542964 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.645940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.645999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.646023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.646051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.646072 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.729093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" event={"ID":"0f32dbad-0f9c-401b-89f2-a5069455e025","Type":"ContainerStarted","Data":"dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.735526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.736118 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.736415 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.748761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.748800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.748813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.748829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.748841 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.751432 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.767179 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.782731 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.782839 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.784078 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.804914 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.819126 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.836672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.851144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.851178 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.851188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.851202 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.851210 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.857300 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.871832 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.884397 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.894655 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.911443 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.924733 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.937310 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.950139 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.953740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.953795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.953812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.953833 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.953849 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:20Z","lastTransitionTime":"2025-11-24T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.967459 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.982919 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:20 crc kubenswrapper[4782]: I1124 11:56:20.994714 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.010942 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.024975 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.037672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.050201 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.056003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.056040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.056050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.056067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.056078 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.061471 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.072708 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.083142 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.101850 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.114447 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.125537 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.137558 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.158390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.158422 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.158429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.158442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.158451 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.260409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.260445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.260456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.260473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.260484 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.363087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.363158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.363177 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.363207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.363229 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.465980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.466024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.466033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.466050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.466062 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.490462 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.490535 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.490576 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:21 crc kubenswrapper[4782]: E1124 11:56:21.490650 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:21 crc kubenswrapper[4782]: E1124 11:56:21.490704 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:21 crc kubenswrapper[4782]: E1124 11:56:21.490783 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.506015 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.521265 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.536274 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.553093 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.565226 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.567413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.567455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.567489 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.567507 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.567520 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.574363 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.595983 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.612907 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.624702 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.638536 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.650784 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.663161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.669356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.669416 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.669427 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.669439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.669449 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.672133 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.687861 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.738465 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.771584 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.771617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.771626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.771656 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.771665 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.875893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.875948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.875957 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.875969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.875993 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.981600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.981885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.981897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.981941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:21 crc kubenswrapper[4782]: I1124 11:56:21.981955 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:21Z","lastTransitionTime":"2025-11-24T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.085198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.085243 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.085261 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.085283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.085297 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.188093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.188148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.188171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.188192 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.188207 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.290674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.290711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.290721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.290734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.290750 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.393486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.393561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.393587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.393617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.393640 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.496441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.496485 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.496493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.496507 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.496518 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.598399 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.598442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.598453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.598470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.598484 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.700402 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.700443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.700454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.700469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.700504 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.741072 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.802039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.802311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.802413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.802488 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.802558 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.905674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.905963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.906060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.906170 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:22 crc kubenswrapper[4782]: I1124 11:56:22.906291 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:22Z","lastTransitionTime":"2025-11-24T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.008383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.008429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.008440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.008481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.008495 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.110725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.110768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.110784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.110804 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.110818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.213001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.213028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.213054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.213067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.213075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.233737 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.250654 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165ab
eedc78473a1827df6bc0f4ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.260603 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.271909 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.281088 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.291980 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.303982 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.314647 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.315059 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.315140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.315158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.315214 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.315232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.324449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.334943 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.346083 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.355847 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.376692 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.389917 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.404024 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.418061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.418140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.418153 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.418172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.418184 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.491695 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.491799 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:23 crc kubenswrapper[4782]: E1124 11:56:23.491911 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.491970 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:23 crc kubenswrapper[4782]: E1124 11:56:23.492029 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:23 crc kubenswrapper[4782]: E1124 11:56:23.492189 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.520885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.520952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.520964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.520982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.520994 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.624277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.624311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.624322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.624337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.624346 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.727308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.727446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.727469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.727495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.727514 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.746083 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/0.log" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.749778 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae" exitCode=1 Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.749832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.750937 4782 scope.go:117] "RemoveContainer" containerID="c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.769213 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.791358 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.808217 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.821461 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.830693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.830730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.830740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.830752 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.830763 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.836938 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.848826 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.861712 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.878238 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.893236 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.906506 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.919028 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.933504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.933538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.933547 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.933560 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.933570 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:23Z","lastTransitionTime":"2025-11-24T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.947326 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165ab
eedc78473a1827df6bc0f4ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"message\\\":\\\"124 11:56:23.061396 6018 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:23.061442 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.061641 6018 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.061794 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062041 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062135 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:56:23.062284 6018 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.062532 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f1
9ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.958506 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:23 crc kubenswrapper[4782]: I1124 11:56:23.976323 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:23Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.036334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.036398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.036410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.036426 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.036437 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.139346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.139416 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.139429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.139443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.139454 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.241941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.241980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.241992 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.242008 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.242022 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.344768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.344804 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.344812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.344825 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.344834 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.447780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.447827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.447838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.447854 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.447864 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.550212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.550248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.550256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.550270 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.550279 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.652837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.652881 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.652891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.652907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.652918 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.757851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.757893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.757905 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.757922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.757934 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
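
Annotation: the NodeNotReady heartbeat repeats roughly every 100ms because the kubelet's runtime network check keeps finding no CNI config; the Ready condition stays False until ovnkube-node writes one into /etc/kubernetes/cni/net.d. The kubelet performs this check through the container runtime and libcni, so the following is only a standalone troubleshooting approximation of the same condition, not the kubelet's code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // dir named in the log message above
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		found = append(found, m...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file in", confDir, "- network plugin not ready")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}
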
Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.758966 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/0.log" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.761212 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.761332 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.777281 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7
f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.788816 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.801307 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.813859 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.824203 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.836970 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.847235 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.860059 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.860251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.860441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.860550 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.860669 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.865038 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16
c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"message\\\":\\\"124 11:56:23.061396 6018 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:23.061442 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.061641 6018 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.061794 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062041 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062135 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:56:23.062284 6018 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.062532 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.877623 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.891271 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.906040 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.922408 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.940111 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.956801 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:24Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.962661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.962702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.962713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.962727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:24 crc kubenswrapper[4782]: I1124 11:56:24.962737 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:24Z","lastTransitionTime":"2025-11-24T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.065459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.065505 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.065516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.065531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.065542 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.168600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.168644 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.168655 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.168671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.168683 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.271032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.271073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.271084 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.271099 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.271109 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.373298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.373334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.373343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.373355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.373363 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.475717 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.475748 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.475760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.475775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.475785 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.490586 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.490622 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.490628 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:25 crc kubenswrapper[4782]: E1124 11:56:25.490717 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:25 crc kubenswrapper[4782]: E1124 11:56:25.490923 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:25 crc kubenswrapper[4782]: E1124 11:56:25.491044 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.578748 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.578818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.578840 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.578869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.578891 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.681533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.681606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.681627 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.681655 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.681678 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.767708 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/1.log" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.768391 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/0.log" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.772031 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.772115 4782 scope.go:117] "RemoveContainer" containerID="c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.772242 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33" exitCode=1 Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.773470 4782 scope.go:117] "RemoveContainer" containerID="a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33" Nov 24 11:56:25 crc kubenswrapper[4782]: E1124 11:56:25.773788 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.785406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.785433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.785442 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.785456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.785463 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.795131 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.810324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.821281 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.840803 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"message\\\":\\\"124 11:56:23.061396 6018 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:23.061442 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.061641 6018 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.061794 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062041 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062135 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:56:23.062284 6018 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.062532 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 
factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.856882 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.870726 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.883800 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z"
Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.887199 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.887230 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.887241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.887257 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.887271 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.901019 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.916337 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.927578 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.942704 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.956060 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.968545 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.982759 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.989482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.989521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.989535 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.989554 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:25 crc kubenswrapper[4782]: I1124 11:56:25.989570 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:25Z","lastTransitionTime":"2025-11-24T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.032429 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh"] Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.032973 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.035112 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.035221 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.048858 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.061672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.072454 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.087511 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4a58354-29ed-4cba-8422-4a433c8ec2ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.087557 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n6ss\" (UniqueName: \"kubernetes.io/projected/c4a58354-29ed-4cba-8422-4a433c8ec2ba-kube-api-access-4n6ss\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.087612 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4a58354-29ed-4cba-8422-4a433c8ec2ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.087674 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4a58354-29ed-4cba-8422-4a433c8ec2ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.088485 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.092216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.092286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.092302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.092320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.092331 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.103569 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.118841 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.128783 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.143251 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2
b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.153312 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.165864 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.182168 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.188920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4a58354-29ed-4cba-8422-4a433c8ec2ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.188966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n6ss\" (UniqueName: \"kubernetes.io/projected/c4a58354-29ed-4cba-8422-4a433c8ec2ba-kube-api-access-4n6ss\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.189002 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4a58354-29ed-4cba-8422-4a433c8ec2ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.189021 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4a58354-29ed-4cba-8422-4a433c8ec2ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.189716 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4a58354-29ed-4cba-8422-4a433c8ec2ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.189845 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4a58354-29ed-4cba-8422-4a433c8ec2ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.201600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.201638 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.201649 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.201669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.201681 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.201684 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.202024 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/c4a58354-29ed-4cba-8422-4a433c8ec2ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.210484 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n6ss\" (UniqueName: \"kubernetes.io/projected/c4a58354-29ed-4cba-8422-4a433c8ec2ba-kube-api-access-4n6ss\") pod \"ovnkube-control-plane-749d76644c-w5bnh\" (UID: \"c4a58354-29ed-4cba-8422-4a433c8ec2ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.211962 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.231311 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"message\\\":\\\"124 11:56:23.061396 6018 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:23.061442 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.061641 6018 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.061794 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062041 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062135 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:56:23.062284 6018 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.062532 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod 
with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.245556 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.303580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.303617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.303630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.303645 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.303657 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.348059 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" Nov 24 11:56:26 crc kubenswrapper[4782]: W1124 11:56:26.367899 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a58354_29ed_4cba_8422_4a433c8ec2ba.slice/crio-e10f5172d7efae982ac99709f0da681bc0d4933acac9aa82a59c10598e57167c WatchSource:0}: Error finding container e10f5172d7efae982ac99709f0da681bc0d4933acac9aa82a59c10598e57167c: Status 404 returned error can't find the container with id e10f5172d7efae982ac99709f0da681bc0d4933acac9aa82a59c10598e57167c Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.407058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.407102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.407115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.407133 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.407148 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.509879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.509921 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.509932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.509949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.509959 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.612898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.612943 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.612964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.612990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.613009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.715780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.715819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.715831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.715846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.715860 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.777452 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" event={"ID":"c4a58354-29ed-4cba-8422-4a433c8ec2ba","Type":"ContainerStarted","Data":"4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.777501 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" event={"ID":"c4a58354-29ed-4cba-8422-4a433c8ec2ba","Type":"ContainerStarted","Data":"8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.777516 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" event={"ID":"c4a58354-29ed-4cba-8422-4a433c8ec2ba","Type":"ContainerStarted","Data":"e10f5172d7efae982ac99709f0da681bc0d4933acac9aa82a59c10598e57167c"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.779464 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/1.log" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.800065 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.814014 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.817690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.817728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.817739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.817754 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.817763 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.823604 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.841019 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"message\\\":\\\"124 11:56:23.061396 6018 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:23.061442 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.061641 6018 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.061794 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062041 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062135 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:56:23.062284 6018 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.062532 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod 
with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.851534 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.861531 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.872331 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.882320 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.892571 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.903440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.912626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.920353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.920415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.920427 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.920445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.920456 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:26Z","lastTransitionTime":"2025-11-24T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.926614 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.940516 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.950628 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:26 crc kubenswrapper[4782]: I1124 11:56:26.965639 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.022751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.022806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.022827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.022859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.022883 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.125524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.125573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.125586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.125604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.125617 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.155555 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-fvr97"] Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.159116 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.159230 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.178117 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.190885 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.199036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.199220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt5gv\" (UniqueName: \"kubernetes.io/projected/1e8feb84-86f6-4afe-9563-42016a7cd6ca-kube-api-access-xt5gv\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.202882 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.218459 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.227546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.227581 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.227590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.227604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.227614 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.232692 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.246169 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.258731 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.277941 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9df6211b2f4273af7e91e463899b09d3bc165abeedc78473a1827df6bc0f4ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"message\\\":\\\"124 11:56:23.061396 6018 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:23.061442 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.061641 6018 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.061794 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062041 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:23.062135 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:56:23.062284 6018 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:23.062532 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 
factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.290947 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.300593 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.300741 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:56:43.300713317 +0000 UTC m=+52.544547106 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.301192 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.301274 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.301299 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt5gv\" (UniqueName: \"kubernetes.io/projected/1e8feb84-86f6-4afe-9563-42016a7cd6ca-kube-api-access-xt5gv\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.301680 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.301742 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:56:27.801724225 +0000 UTC m=+37.045558004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.301975 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.302167 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:43.302140646 +0000 UTC m=+52.545974445 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.304274 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.318757 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.319623 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt5gv\" (UniqueName: \"kubernetes.io/projected/1e8feb84-86f6-4afe-9563-42016a7cd6ca-kube-api-access-xt5gv\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.330253 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.330297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.330308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.330337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.330349 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.332724 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.345262 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.356448 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.367689 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.378423 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:27Z is after 2025-08-24T17:21:41Z"
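All of the "Failed to update status for pod" entries above fail the same way: the PATCH is intercepted by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and the TLS handshake is rejected because the webhook's serving certificate expired at 2025-08-24T17:21:41Z, three months before the node clock (2025-11-24). The wording comes from Go's x509 verifier; below is a minimal sketch of the same expiry check, assuming the serving certificate has been extracted to /tmp/webhook-serving-cert.pem (a hypothetical path, not one from this log).

```go
// Sketch only: reproduce the client-side expiry check behind the
// "x509: certificate has expired or is not yet valid" errors above.
// /tmp/webhook-serving-cert.pem is an assumed path, not taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if now := time.Now().UTC(); now.After(cert.NotAfter) {
		// Same comparison the TLS verifier makes; for the certificate in
		// this log, NotAfter would be 2025-08-24T17:21:41Z.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

On a CRC cluster this is typically a stale-VM symptom: the embedded certificates aged out while the VM was powered off, and the webhook stays uncallable until the cluster rotates them.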
Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.402599 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.402652 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.402698 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402788 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402788 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402824 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402866 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:43.402823582 +0000 UTC m=+52.646657361 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402867 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402788 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402921 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402933 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402902 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:43.402894683 +0000 UTC m=+52.646728472 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.402977 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:56:43.402965305 +0000 UTC m=+52.646799084 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.433215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.433252 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.433263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.433278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.433290 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.490629 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.490629 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.490767 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.490832 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.490866 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.491004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.536180 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.536213 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.536224 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.536239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.536249 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.638805 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.638863 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.638876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.638892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.638903 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.741323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.741415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.741432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.741461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.741480 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.807322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.807597 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:27 crc kubenswrapper[4782]: E1124 11:56:27.807723 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:56:28.807691685 +0000 UTC m=+38.051525494 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered
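One detail worth noticing in the nestedpendingoperations entries: the metrics-certs volume, failing here for the first time, gets durationBeforeRetry 1s, while the volumes that have been failing since startup are already at 16s. That progression is consistent with a doubling backoff. A sketch of that shape follows, with assumed constants (the kubelet's real initial delay and cap are not visible in this log).

```go
// Illustrative only: a doubling retry backoff matching the durationBeforeRetry
// progression above (1s on an early failure, 16s after several). The initial
// delay and the cap are assumptions, not values taken from the kubelet.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed starting point
	maxDelay := 2 * time.Minute     // assumed upper bound
	for attempt := 1; attempt <= 8; attempt++ {
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
		// attempt 1 prints 1s, attempt 5 prints 16s, as seen in the log.
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
	}
}
```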
Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.844659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.844706 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.844720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.844739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.844755 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.948452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.948497 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.948509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.948527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:27 crc kubenswrapper[4782]: I1124 11:56:27.948540 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:27Z","lastTransitionTime":"2025-11-24T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.052294 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.052670 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.052688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.052708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.052729 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.128344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.128421 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.128435 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.128452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.128489 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
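The repeating NodeNotReady condition is mechanical: the runtime reports NetworkReady=false until a CNI configuration file shows up in /etc/kubernetes/cni/net.d/, and on this cluster that file appears to be written by ovn-kubernetes, whose own pods are stuck behind the expired webhook certificate, hence the loop. A sketch of the directory check (the glob patterns are an assumption about accepted config names, not something this log states):

```go
// Sketch: check the directory named in the "NetworkPluginNotReady" message.
// The *.conf/*.conflist/*.json patterns are an assumption about accepted
// CNI config file names, not something this log states.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory from the log message
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			continue // Glob only errors on a malformed pattern
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Fprintf(os.Stderr, "no CNI configuration file in %s. Has your network provider started?\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}
```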
Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.144000 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.149428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.149466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.149502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.149518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.149531 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.172796 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... images list identical to the 11:56:28.144000 entry above ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.176660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.176691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.176700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.176713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.176723 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.191165 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... images list identical to the 11:56:28.144000 entry above; capture truncated here ...]
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.195517 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.195575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.195594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.195680 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.195700 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.210513 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.215211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.215263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
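Every status-patch retry above and below fails for the same reason, visible at the tail of each error record: the serving certificate for the "node.network-node-identity.openshift.io" webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24. A minimal probe for confirming this from the node is sketched below; it is not part of the log, the address comes from the Post URL in the error, and it assumes the webhook endpoint is still listening.

    // certcheck.go — sketch: dial the webhook endpoint and print the
    // validity window of whatever certificate it presents.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Verification is skipped deliberately: the point is to read the
        // expired certificate, not to trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }

A notAfter earlier than the node's current time reproduces the "certificate has expired or is not yet valid" failure exactly as the kubelet reports it.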
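The NodeNotReady heartbeats that continue below all hang off a single condition: the CNI configuration directory named in the message stays empty until the cluster's network provider comes up. A small probe of that check is sketched below; it is not part of the log, the path is taken from the kubelet message, and the extension list is an assumption matching libcni's conventional candidates.

    // cniprobe.go — sketch: report whether the CNI conf dir the kubelet is
    // complaining about contains any candidate network configuration yet.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const dir = "/etc/kubernetes/cni/net.d" // path from the log message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // assumed candidate extensions
                fmt.Println("candidate CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file found; matches NetworkReady=false above")
        }
    }

While that directory stays empty, every pod sync and every node heartbeat repeats the same NetworkReady=false message, which is why the records below are near-identical apart from their timestamps.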
event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.215281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.215306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.215323 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.231774 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.232138 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.237712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.237762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.237773 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.237793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.237805 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.340752 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.340792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.340801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.340816 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.340829 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.443075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.443963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.444098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.444216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.444327 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.489933 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.490080 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.546395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.546659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.546731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.546847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.546943 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.649302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.649366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.649392 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.649413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.649455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.752498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.752565 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.752586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.752617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.752640 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.818942 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.819104 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:28 crc kubenswrapper[4782]: E1124 11:56:28.819566 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:56:30.819538825 +0000 UTC m=+40.063372624 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.855092 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.855681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.855847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.856033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.856180 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.960583 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.960640 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.960656 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.960686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:28 crc kubenswrapper[4782]: I1124 11:56:28.960708 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:28Z","lastTransitionTime":"2025-11-24T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.062956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.063012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.063030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.063055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.063076 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.165034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.165107 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.165128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.165156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.165176 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.267514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.267576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.267593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.267617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.267636 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.370365 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.370472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.370495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.370524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.370543 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.472535 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.472615 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.472709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.472743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.472767 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.490151 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.490211 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.490249 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:29 crc kubenswrapper[4782]: E1124 11:56:29.490275 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:29 crc kubenswrapper[4782]: E1124 11:56:29.490438 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:29 crc kubenswrapper[4782]: E1124 11:56:29.490598 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.574496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.574533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.574542 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.574554 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.574564 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.676905 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.676948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.676956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.676969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.676978 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.778930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.779153 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.779243 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.779336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.779469 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.881548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.881763 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.881824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.881922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.882003 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.984532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.984563 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.984574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.984587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:29 crc kubenswrapper[4782]: I1124 11:56:29.984596 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:29Z","lastTransitionTime":"2025-11-24T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.087522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.087580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.087602 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.087633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.087659 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.191065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.191341 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.191530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.191672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.191793 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.295438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.295489 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.295506 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.295528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.295544 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.399098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.399166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.399185 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.399209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.399226 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.490534 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:30 crc kubenswrapper[4782]: E1124 11:56:30.490673 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.502105 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.502147 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.502157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.502190 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.502201 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.605211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.605266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.605283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.605307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.605323 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.708361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.708480 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.708503 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.708532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.708553 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.811956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.812009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.812024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.812046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.812065 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.842824 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:30 crc kubenswrapper[4782]: E1124 11:56:30.843062 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:30 crc kubenswrapper[4782]: E1124 11:56:30.843161 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:56:34.843137213 +0000 UTC m=+44.086971022 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.915266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.915319 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.915329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.915345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:30 crc kubenswrapper[4782]: I1124 11:56:30.915357 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:30Z","lastTransitionTime":"2025-11-24T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.017346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.017398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.017409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.017424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.017454 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.120088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.120150 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.120169 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.120192 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.120208 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.223515 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.223575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.223588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.223607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.223652 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.321506 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.323778 4782 scope.go:117] "RemoveContainer" containerID="a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33" Nov 24 11:56:31 crc kubenswrapper[4782]: E1124 11:56:31.324068 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.326016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.326038 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.326046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.326060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.326068 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.340936 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.356791 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.370258 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.395630 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.409861 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.424249 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.428567 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.428626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.428646 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.428669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.428685 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.439556 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.468191 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.485576 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.490416 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.490416 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.490508 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:31 crc kubenswrapper[4782]: E1124 11:56:31.490660 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:31 crc kubenswrapper[4782]: E1124 11:56:31.490756 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:31 crc kubenswrapper[4782]: E1124 11:56:31.490853 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.503216 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.518054 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.530936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.530972 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.530983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.530999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.531009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.534574 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.555197 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.576262 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.586644 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.599170 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.609981 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.619681 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.630761 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.633475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.633538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.633557 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.633583 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.633601 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.654410 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.667355 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.682189 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.695967 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.705935 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.715440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.728157 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.735525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.735567 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.735575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.735589 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.735598 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.739012 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.751726 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.764639 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.780449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.790838 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.805665 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.837704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.837741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.837751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.837766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:31 crc kubenswrapper[4782]: I1124 11:56:31.837778 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:31Z","lastTransitionTime":"2025-11-24T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:56:32 crc kubenswrapper[4782]: I1124 11:56:32.490401 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:56:32 crc kubenswrapper[4782]: E1124 11:56:32.490602 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
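
Every "Error syncing pod" in this stretch carries the same cause string: no CNI configuration file in /etc/kubernetes/cni/net.d/. A short Go sketch, illustrative only, that checks whether the network plugin has written its configuration yet; the directory is the one named in the message, and the extensions are the ones CNI config loaders conventionally accept:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("read %s: %v", dir, err)
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration files yet; the network plugin has not started")
        }
    }
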
Nov 24 11:56:33 crc kubenswrapper[4782]: I1124 11:56:33.492495 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:56:33 crc kubenswrapper[4782]: I1124 11:56:33.492564 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:56:33 crc kubenswrapper[4782]: I1124 11:56:33.492598 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:56:33 crc kubenswrapper[4782]: E1124 11:56:33.492764 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:56:33 crc kubenswrapper[4782]: E1124 11:56:33.492868 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:56:33 crc kubenswrapper[4782]: E1124 11:56:33.493035 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
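
The condition={...} blob that setters.go logs with each "Node became not ready" entry is the node's Ready condition as it will be written to the Node object's status. A sketch that reproduces the JSON shape with a local struct mirroring those fields; the real type is NodeCondition in k8s.io/api/core/v1, and the message here is shortened:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // Local mirror of the fields visible in the setters.go log line.
    type NodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        ts := time.Date(2025, 11, 24, 11, 56, 33, 0, time.UTC)
        c := NodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  ts,
            LastTransitionTime: ts,
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false ...",
        }
        out, _ := json.Marshal(c)
        fmt.Println(string(out)) // matches the condition={...} shape in the log
    }
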
Nov 24 11:56:34 crc kubenswrapper[4782]: I1124 11:56:34.490469 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:56:34 crc kubenswrapper[4782]: E1124 11:56:34.490678 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
Nov 24 11:56:34 crc kubenswrapper[4782]: I1124 11:56:34.886996 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:56:34 crc kubenswrapper[4782]: E1124 11:56:34.887188 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 11:56:34 crc kubenswrapper[4782]: E1124 11:56:34.887327 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:56:42.887301183 +0000 UTC m=+52.131134962 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered
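
The nestedpendingoperations.go entry shows the kubelet's volume manager backing off: no retry of the metrics-certs mount is permitted for 8s. A sketch of a doubling backoff of that shape; the constants are illustrative, not the kubelet's exact tuning:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the wait after each failure, starting small and
    // capping out, which is how an 8s durationBeforeRetry can arise by the
    // fifth failed MountVolume attempt.
    func nextBackoff(current time.Duration) time.Duration {
        const (
            initial = 500 * time.Millisecond
            max     = 2 * time.Minute
        )
        if current == 0 {
            return initial
        }
        if next := current * 2; next < max {
            return next
        }
        return max
    }

    func main() {
        var d time.Duration
        for attempt := 1; attempt <= 6; attempt++ {
            d = nextBackoff(d)
            fmt.Printf("attempt %d: wait %s before retrying\n", attempt, d)
        }
    }
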
Nov 24 11:56:35 crc kubenswrapper[4782]: I1124 11:56:35.490056 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:56:35 crc kubenswrapper[4782]: I1124 11:56:35.490170 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:56:35 crc kubenswrapper[4782]: E1124 11:56:35.490428 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:56:35 crc kubenswrapper[4782]: I1124 11:56:35.490514 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:56:35 crc kubenswrapper[4782]: E1124 11:56:35.490667 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:56:35 crc kubenswrapper[4782]: E1124 11:56:35.490811 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
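
Rather than tailing the journal, the Ready condition these entries keep rewriting can be read from the API with client-go. A sketch that fetches the crc node's Ready condition; it assumes a kubeconfig at the default path and a reachable API server:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == "Ready" {
                fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }
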
Has your network provider started?"} Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.367159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.367223 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.367249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.367275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.367292 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:36Z","lastTransitionTime":"2025-11-24T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.469885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.469961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.469979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.470004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.470021 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:36Z","lastTransitionTime":"2025-11-24T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.490007 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:36 crc kubenswrapper[4782]: E1124 11:56:36.490165 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.573261 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.573409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.573431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.573453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.573472 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:36Z","lastTransitionTime":"2025-11-24T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.676348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.676462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.676475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.676494 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.676507 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:36Z","lastTransitionTime":"2025-11-24T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.779418 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.779482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.779502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.779525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.779543 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:36Z","lastTransitionTime":"2025-11-24T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.882812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.882945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.882971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.882997 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.883018 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:36Z","lastTransitionTime":"2025-11-24T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.986908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.986957 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.986968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.986985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:36 crc kubenswrapper[4782]: I1124 11:56:36.986998 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:36Z","lastTransitionTime":"2025-11-24T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.089948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.089982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.089993 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.090010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.090021 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.192777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.193096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.193303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.193611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.193837 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.296805 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.296846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.296858 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.296873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.296884 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.400024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.400125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.400161 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.400195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.400215 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.490001 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.490030 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:37 crc kubenswrapper[4782]: E1124 11:56:37.490213 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.490240 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:37 crc kubenswrapper[4782]: E1124 11:56:37.490346 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:37 crc kubenswrapper[4782]: E1124 11:56:37.490444 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.502258 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.502294 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.502302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.502316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.502327 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.604740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.604779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.604790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.604806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.604815 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.707004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.707057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.707070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.707088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.707099 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.809944 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.810019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.810039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.810061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.810079 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.912922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.913057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.913079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.913107 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:37 crc kubenswrapper[4782]: I1124 11:56:37.913130 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:37Z","lastTransitionTime":"2025-11-24T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.017210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.017264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.017283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.017306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.017323 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.120622 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.120685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.120704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.120735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.120758 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.224122 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.224180 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.224198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.224220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.224240 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.327515 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.327578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.327595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.327619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.327638 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.431814 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.431898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.431924 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.431962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.431987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.437759 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.437806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.437815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.437831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.437842 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: E1124 11:56:38.454123 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.459552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.459587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.459598 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.459613 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.459624 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: E1124 11:56:38.475606 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.480742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.480793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
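
[analysis] Here is the actual blocker for the node-status updates, distinct from the CNI problem: both PATCH attempts (11:56:38.454123 and 11:56:38.475606) die at the node.network-node-identity.openshift.io admission webhook on https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z, roughly three months before the node's clock (2025-11-24T11:56:38Z). That is consistent with a CRC VM brought back up long after its internal certificates were minted. One can confirm from the node by pulling the webhook's peer certificate and comparing its validity window to the clock; the endpoint comes from the log, while using the third-party cryptography package to decode the certificate is an assumption of this sketch.

#!/usr/bin/env python3
"""Fetch a TLS endpoint's certificate and report its validity window.

Host/port come from the webhook URL in the log; the `cryptography`
package (pip install cryptography) is an assumed dependency.
"""
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # node.network-node-identity.openshift.io webhook

def peer_cert(host: str, port: int) -> x509.Certificate:
    # Verification is disabled on purpose: we want to inspect the very
    # certificate that fails verification.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return x509.load_der_x509_certificate(der)

if __name__ == "__main__":
    cert = peer_cert(HOST, PORT)
    now = datetime.now(timezone.utc)
    not_after = cert.not_valid_after_utc  # cryptography >= 42; older: .not_valid_after
    print(f"subject:   {cert.subject.rfc4514_string()}")
    print(f"notBefore: {cert.not_valid_before_utc}")
    print(f"notAfter:  {not_after}")
    print("EXPIRED" if now > not_after else "valid", f"(now {now:%Y-%m-%dT%H:%M:%SZ})")

If the window comes back expired, the usual CRC remedy is to restart the cluster and let its built-in certificate recovery rotate the expired certs, rather than replacing the webhook certificate by hand.
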
event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.480803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.480824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.480836 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.492236 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:38 crc kubenswrapper[4782]: E1124 11:56:38.492499 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:38 crc kubenswrapper[4782]: E1124 11:56:38.497625 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.501709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.501759 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.501775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.501796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.501809 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: E1124 11:56:38.521489 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.527129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.527211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
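Every status patch above is rejected at the same point: the node.network-node-identity.openshift.io webhook presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is three months behind the node clock (2025-11-24T11:56:38Z), so the TLS handshake fails with "x509: certificate has expired or is not yet valid" before the patch is ever evaluated. A minimal Go sketch of that validity-window check, assuming the webhook's serving certificate has been exported to a local PEM file (the path below is hypothetical):

// certcheck.go - sketch of the validity-window test behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; extract the webhook's serving cert here first.
	raw, err := os.ReadFile("/tmp/network-node-identity-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The log shows current time 2025-11-24T11:56:38Z vs NotAfter 2025-08-24T17:21:41Z.
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: NotBefore=%s\n", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Printf("expired: NotAfter=%s\n", cert.NotAfter)
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter)
	}
}

Until that certificate is rotated, no amount of kubelet retrying can make the patch succeed.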
event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.527234 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.527265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.527290 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: E1124 11:56:38.545553 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:38 crc kubenswrapper[4782]: E1124 11:56:38.545787 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.547723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
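The give-up message above comes from the kubelet's bounded retry around the status PATCH; in upstream Kubernetes the attempt budget in kubelet_node_status.go is the constant nodeStatusUpdateRetry (5). A simplified sketch of that pattern, with a hypothetical tryPatchStatus standing in for the real API call:

// retry.go - simplified sketch of the bounded retry that produces
// "Unable to update node status" err="update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
	"log"
)

// Matches the upstream kubelet constant of the same name.
const nodeStatusUpdateRetry = 5

// tryPatchStatus is a hypothetical stand-in for the PATCH against the API
// server; here it always fails the way the webhook failure above does.
func tryPatchStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchStatus(); err != nil {
			log.Printf("Error updating node status, will retry: %v", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		log.Printf("Unable to update node status: %v", err)
	}
}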
event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.547785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.547803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.547829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.547849 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.650850 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.650888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.650899 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.650914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.650925 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.754511 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.754574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.754585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.754606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.754620 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.858241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.858295 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.858314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.858338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.858352 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.960075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.960108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.960116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.960153 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:38 crc kubenswrapper[4782]: I1124 11:56:38.960173 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:38Z","lastTransitionTime":"2025-11-24T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.063336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.063425 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.063441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.063464 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.063479 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.168075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.168204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.168241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.168259 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.168301 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.270856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.270889 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.270899 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.270913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.270923 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.373087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.373146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.373155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.373170 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.373179 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.475759 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.475802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.475817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.475837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.475852 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.490632 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:56:39 crc kubenswrapper[4782]: E1124 11:56:39.490736 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.490946 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:56:39 crc kubenswrapper[4782]: E1124 11:56:39.491012 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.491186 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:56:39 crc kubenswrapper[4782]: E1124 11:56:39.491269 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
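The pods above fail to sync for the same underlying reason the node is NotReady: the runtime reports NetworkReady=false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/. A small Go sketch of that readiness test, loosely mirroring libcni's scan for .conf/.conflist/.json files in the directory named by the log:

// cnicheck.go - sketch of the readiness test the runtime keeps failing:
// NetworkReady stays false until a CNI config exists in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The directory named in the kubelet messages above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	var confs []string
	for _, e := range entries {
		// libcni accepts these extensions when discovering network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Printf("CNI configs present: %v\n", confs)
}

Once the network provider (here, the OVN-Kubernetes stack gated behind the expired webhook certificate) writes its config into that directory, these pod syncs and the Ready condition recover on their own.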
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.578200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.578236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.578247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.578263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.578275 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.686637 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.686675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.686687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.686704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.686714 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.789903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.789963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.789985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.790015 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.790033 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.892736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.892785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.892804 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.892831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.892852 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.995642 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.995698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.995715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.995738 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:39 crc kubenswrapper[4782]: I1124 11:56:39.995756 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:39Z","lastTransitionTime":"2025-11-24T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.098565 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.098596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.098606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.098620 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.098630 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.201549 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.201590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.201601 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.201616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.201627 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.305222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.305285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.305303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.305329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.305349 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.408275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.408335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.408361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.408439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.408462 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.490421 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:40 crc kubenswrapper[4782]: E1124 11:56:40.490549 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.511921 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.512025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.512045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.512069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.512086 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.616092 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.616132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.616141 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.616155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.616165 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.718231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.718259 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.718268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.718281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.718291 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.821247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.821337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.821408 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.821446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.821468 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.924797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.924843 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.924856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.924871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:40 crc kubenswrapper[4782]: I1124 11:56:40.924882 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:40Z","lastTransitionTime":"2025-11-24T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.027758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.027842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.027864 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.027892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.027916 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.130996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.131052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.131069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.131093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.131112 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.233347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.233391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.233400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.233412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.233421 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.335324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.335368 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.335426 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.335449 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.335466 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.438850 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.438903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.438918 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.438939 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.438953 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.490910 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:41 crc kubenswrapper[4782]: E1124 11:56:41.491172 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.491229 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:41 crc kubenswrapper[4782]: E1124 11:56:41.491447 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.491866 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:41 crc kubenswrapper[4782]: E1124 11:56:41.492629 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.517475 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.536552 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.542212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.542278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.542290 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.542307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.542319 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.551344 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.573532 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 
2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.588083 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.602701 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.615139 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.635524 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.644269 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.644302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.644311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.644325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.644334 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.650546 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.664324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.677696 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.691299 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 
11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.710182 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.727272 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.738150 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.746960 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.746994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.747006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.747020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.747029 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.749128 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.789283 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.799016 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.802082 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 
11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.815909 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.828716 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.839444 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.849242 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.849509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.849574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.849633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.849687 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.851753 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.860737 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.869979 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.877144 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.892309 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 
11:56:41.906242 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.917687 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.927064 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.938449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.950134 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.951356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.951424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.951434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.951447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.951458 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:41Z","lastTransitionTime":"2025-11-24T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.960746 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:41 crc kubenswrapper[4782]: I1124 11:56:41.977001 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.053411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.053441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.053449 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.053461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.053469 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.155966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.156065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.156085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.156108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.156163 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.258498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.258548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.258561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.258575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.258586 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.361353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.361442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.361462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.361486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.361505 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.464926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.465019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.465051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.465085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.465131 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.490493 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:42 crc kubenswrapper[4782]: E1124 11:56:42.490801 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.568104 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.568170 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.568186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.568210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.568230 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.671358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.671410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.671421 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.671439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.671453 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.774600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.774673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.774701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.774734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.774757 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.878216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.878265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.878282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.878305 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.878322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.968881 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:42 crc kubenswrapper[4782]: E1124 11:56:42.969088 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:42 crc kubenswrapper[4782]: E1124 11:56:42.969155 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:56:58.969134319 +0000 UTC m=+68.212968098 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.981435 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.981516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.981529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.981544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:42 crc kubenswrapper[4782]: I1124 11:56:42.981553 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:42Z","lastTransitionTime":"2025-11-24T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.084454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.084502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.084512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.084527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.084539 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.187276 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.187316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.187324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.187338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.187348 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.291348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.291436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.291452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.291529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.291551 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.373228 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.373420 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:57:15.373398787 +0000 UTC m=+84.617232556 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.373437 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.373606 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.373662 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:57:15.373651113 +0000 UTC m=+84.617484882 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.394045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.394155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.394177 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.394205 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.394226 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.474549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.474613 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.474662 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.474781 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.474839 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:57:15.474822553 +0000 UTC m=+84.718656332 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.474895 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.474964 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.474992 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.475084 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:57:15.475057569 +0000 UTC m=+84.718891368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.475186 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.475206 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.475221 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.475264 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:57:15.475250715 +0000 UTC m=+84.719084514 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.490814 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.490872 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.490974 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.491053 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.491063 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:43 crc kubenswrapper[4782]: E1124 11:56:43.491273 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.496420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.496467 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.496479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.496495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.496507 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.598915 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.598959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.598971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.598989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.599003 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.701895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.701950 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.701960 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.701974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.701983 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.804003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.804052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.804063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.804082 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.804094 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.907231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.907267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.907292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.907306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:43 crc kubenswrapper[4782]: I1124 11:56:43.907317 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:43Z","lastTransitionTime":"2025-11-24T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.009700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.009994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.010031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.010060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.010083 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.112875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.112924 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.112934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.112948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.112959 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.215157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.215198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.215210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.215223 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.215231 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.317433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.317490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.317507 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.317532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.317550 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.420323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.420426 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.420454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.420483 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.420506 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.490222 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:44 crc kubenswrapper[4782]: E1124 11:56:44.490363 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.522656 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.522703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.522712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.522730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.522742 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.625159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.625227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.625239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.625256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.625269 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.727749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.727788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.727799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.727815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.727825 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.829738 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.829772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.829780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.829793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.829801 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.931742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.932034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.932043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.932055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:44 crc kubenswrapper[4782]: I1124 11:56:44.932064 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:44Z","lastTransitionTime":"2025-11-24T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.034282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.034315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.034325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.034338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.034349 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.137227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.137303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.137335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.137363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.137417 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.239834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.239865 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.239876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.239893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.239944 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.342324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.342393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.342403 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.342417 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.342427 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.445659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.445735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.445755 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.445777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.445797 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.490030 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.490083 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.490114 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:56:45 crc kubenswrapper[4782]: E1124 11:56:45.490196 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:56:45 crc kubenswrapper[4782]: E1124 11:56:45.490316 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:56:45 crc kubenswrapper[4782]: E1124 11:56:45.490498 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.548891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.549001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.549028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.549064 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.549089 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.657559 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.657612 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.657624 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.657643 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.657656 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.759798 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.759834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.759844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.759857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.759867 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.862443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.862501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.862520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.862544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.862562 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.965251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.965313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.965332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.965357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:45 crc kubenswrapper[4782]: I1124 11:56:45.965434 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:45Z","lastTransitionTime":"2025-11-24T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.067830 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.067893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.067910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.067934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.067951 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.170555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.170630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.170643 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.170684 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.170702 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.273576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.273650 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.273667 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.273692 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.273710 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.376321 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.376362 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.376385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.376399 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.376410 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.479320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.479365 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.479394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.479414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.479429 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.489850 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:56:46 crc kubenswrapper[4782]: E1124 11:56:46.490098 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.491557 4782 scope.go:117] "RemoveContainer" containerID="a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.582035 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.582097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.582118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.582145 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.582163 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.684603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.684651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.684666 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.684685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.684698 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.787346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.787413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.787424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.787438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.787449 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.850812 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/1.log"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.853551 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99"}
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.853916 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx"
Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.878467 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.889213 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.889253 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.889261 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.889274 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.889283 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.895824 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.912023 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.926162 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.937092 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.951201 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.964605 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.984418 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.991279 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.991326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.991343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.991365 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:46 crc kubenswrapper[4782]: I1124 11:56:46.991416 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:46Z","lastTransitionTime":"2025-11-24T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.002002 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.026591 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.043246 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.056161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 
11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.069949 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.082708 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.093400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.093431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.093440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.093452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.093460 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.096230 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.105498 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.117156 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.196102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
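
Every "Failed to update status for pod" entry above shares one root cause: the kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, roughly three months before the node's clock time of 2025-11-24. (The runs of \\\" inside those entries are ordinary quoting, not corruption: the JSON patch is embedded as a quoted string in the err field, which is itself quoted in the structured journal line.) Below is a minimal, standalone Go sketch of one way to confirm the expiry from the node; it is illustrative only, is not part of the kubelet or any component in this log, and assumes the endpoint completes a TLS handshake without demanding a client certificate. Only the address 127.0.0.1:9743 is taken from the log; everything else is assumption.

    // certcheck: hypothetical helper, not part of any component in this log.
    // Dials a TLS endpoint with chain verification disabled (so the handshake
    // succeeds even when the certificate is expired) and prints the leaf
    // certificate's validity window.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the kubelet errors above

        conn, err := tls.Dial("tcp", addr, &tls.Config{
            InsecureSkipVerify: true, // inspection only: do not verify the chain
        })
        if err != nil {
            log.Fatalf("dial %s: %v", addr, err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now().UTC()
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        fmt.Printf("expired:   %v\n", now.After(cert.NotAfter))
    }

If the diagnosis above is right, notAfter should print 2025-08-24T17:21:41Z, matching the x509 error the kubelet records on every status patch attempt.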
Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.196181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.196191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.196206 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.196218 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.299035 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.299075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.299083 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.299097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.299105 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.401667 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.401716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.401726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.401741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.401753 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.490922 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.490999 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:47 crc kubenswrapper[4782]: E1124 11:56:47.491136 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.491231 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:47 crc kubenswrapper[4782]: E1124 11:56:47.491500 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:47 crc kubenswrapper[4782]: E1124 11:56:47.491509 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.504726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.504789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.504807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.504830 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.504847 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.608052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.608097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.608107 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.608122 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.608132 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.711367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.711484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.711501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.711525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.711542 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.813992 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.814045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.814053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.814070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.814079 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.857762 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/2.log" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.858294 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/1.log" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.861750 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99" exitCode=1 Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.861786 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.861820 4782 scope.go:117] "RemoveContainer" containerID="a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.862546 4782 scope.go:117] "RemoveContainer" containerID="ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99" Nov 24 11:56:47 crc kubenswrapper[4782]: E1124 11:56:47.862785 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.877328 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.887997 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.899236 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.909062 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.916682 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.916725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.916737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.916753 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.916765 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:47Z","lastTransitionTime":"2025-11-24T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.920572 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.931802 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.942256 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.951242 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 
11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.960654 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.968820 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.980292 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:47 crc kubenswrapper[4782]: I1124 11:56:47.993014 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.003200 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.019574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.019614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.019626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.019641 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.019653 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.020110 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2eff3f77173c2c51daa72068032a7896382fb16c8a36a4fb877010dd5a71a33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:24Z\\\",\\\"message\\\":\\\"17.0.92/23] and MAC: 0a:58:0a:d9:00:5c\\\\nI1124 11:56:24.531584 6155 factory.go:656] Stopping watch factory\\\\nI1124 11:56:24.531594 6155 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nI1124 11:56:24.531600 6155 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:56:24.531587 6155 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 1.886331ms, libovsdb time 1.398928ms\\\\nI1124 11:56:24.531612 6155 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI1124 11:56:24.531617 6155 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1124 11:56:24.531602 6155 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" in cache\\\\nI1124 11:56:24.531626 6155 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 4.599675ms)\\\\nI1124 11:56:24.531633 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:56:24.531711 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mo
untPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.029872 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.039571 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.053697 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.121283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.121320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.121329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.121343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.121352 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.224802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.224841 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.224852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.224869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.224880 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.327339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.327422 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.327440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.327465 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.327484 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.431677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.431730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.431748 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.431807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.431826 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.490988 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.491621 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.534697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.534762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.534778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.534807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.534824 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.588328 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.588401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.588413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.588428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.588436 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.602332 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.607099 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.607152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.607168 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.607188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.607203 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.622218 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.626869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.626930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.626942 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.626962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.626975 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.641633 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.646735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.646909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.647006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.647102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.647191 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.661800 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.666339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.666405 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.666422 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.666444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.666457 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.680763 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.681295 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.682764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.682946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.683042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.683135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.683272 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.786355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.786465 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.786484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.786512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.786529 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.866854 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/2.log" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.870659 4782 scope.go:117] "RemoveContainer" containerID="ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99" Nov 24 11:56:48 crc kubenswrapper[4782]: E1124 11:56:48.870868 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.883827 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 
2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.889571 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.889607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.889619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.889636 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.889648 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.908155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa86
63b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.923931 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.937602 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.949768 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.962738 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.973682 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.987588 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.991548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.991590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.991600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.991617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:48 crc kubenswrapper[4782]: I1124 11:56:48.991628 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:48Z","lastTransitionTime":"2025-11-24T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.002196 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.014131 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.023265 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.032503 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.042944 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.052104 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.065717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 
2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.078804 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.089661 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.094003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.094039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.094050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.094067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.094078 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
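Every "Failed to update status for pod" entry in this capture fails for the same reason, spelled out at the tail of each patch error: the kubelet reaches the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod, but the webhook's serving certificate expired at 2025-08-24T17:21:41Z, roughly three months before the node clock's 2025-11-24T11:56:49Z. The "x509: certificate has expired or is not yet valid" text is Go's standard NotBefore/NotAfter validity check. A minimal sketch that reproduces the same comparison against a certificate on disk (the /tmp/webhook-serving.crt path is a stand-in for illustration, not a path taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Stand-in path: point this at the serving certificate the webhook
	// on 127.0.0.1:9743 presents.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		fmt.Fprintln(os.Stderr, "no PEM CERTIFICATE block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the log text: "current time <now> is after <NotAfter>".
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate is valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

The practical takeaway is that certificate rotation has not yet run for this cluster (typical when a CRC VM resumes after months powered off), so the same expired webhook rejects every status patch for every pod below until rotation completes; the kubelet simply keeps retrying, which is why the identical error recurs at 11:56:49, 11:56:51, and onward.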
Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.196208 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.196688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.196847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.196994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.197128 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.300842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.300892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.300910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.300935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.300953 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.403690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.404096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.404230 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.404423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.404592 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.490185 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:49 crc kubenswrapper[4782]: E1124 11:56:49.490319 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.490349 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.490184 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:49 crc kubenswrapper[4782]: E1124 11:56:49.490556 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:49 crc kubenswrapper[4782]: E1124 11:56:49.490829 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.507431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.507638 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.507779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.507921 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.508062 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.610677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.611117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.611263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.611462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.611633 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.714592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.715007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.715164 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.715302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.715497 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.818712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.818830 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.818844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.818860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.818872 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.920994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.921041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.921052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.921069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.921081 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:49Z","lastTransitionTime":"2025-11-24T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.023444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.023526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.023552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.023582 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.023605 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.127661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.127729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.127745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.127768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.127782 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
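Interleaved with the webhook failures, the kubelet flips the node's Ready condition to False on every status-update attempt with reason KubeletNotReady: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet. The readiness check behind that message boils down to "does the CNI conf directory contain a loadable network config". A minimal sketch of that directory scan, assuming the .conf/.conflist/.json extensions that libcni conventionally loads:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet error message.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read CNI conf dir:", err)
		os.Exit(1)
	}
	var configs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Extensions libcni conventionally accepts (assumption; check your runtime's docs).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// This is the state the kubelet keeps reporting: NetworkReady=false.
		fmt.Println("no CNI configuration file found; network plugin has not written its config yet")
		return
	}
	fmt.Println("CNI configurations present:", configs)
}

On OpenShift/CRC that config is written by the cluster network plugin (OVN-Kubernetes) once its own pods come up, so the condition normally clears on its own; until then the "No sandbox for pod can be found" retries seen above keep failing with the same NetworkPluginNotReady error.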
Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.230565 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.230644 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.230663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.230684 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.230699 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.333623 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.333667 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.333680 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.333696 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.333707 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.436743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.436807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.436825 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.436848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.436865 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.490664 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:50 crc kubenswrapper[4782]: E1124 11:56:50.490898 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.539784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.539857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.539881 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.539911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.539933 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.643304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.643495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.643574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.643600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.643618 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.747220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.747579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.747691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.747790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.747888 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.851604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.851671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.851688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.851713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.851731 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.953875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.953925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.953934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.953949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:50 crc kubenswrapper[4782]: I1124 11:56:50.953962 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:50Z","lastTransitionTime":"2025-11-24T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.056546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.056609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.056634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.056663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.056680 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.159777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.159832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.159848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.159871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.159888 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.263276 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.263315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.263331 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.263352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.263368 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
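The NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady block repeats on every status-update attempt, several times per second, which buries the few genuinely distinct entries (the status-patch failures and the sandbox retries). When triaging a capture like this, it helps to collapse consecutive entries whose bodies differ only in timestamps. A small sketch of such a filter, assuming the usual one-entry-per-line journalctl output (the long re-wrapped lines shown here would need unwrapping first):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Journal prefix plus klog header, e.g.
	// "Nov 24 11:56:49 crc kubenswrapper[4782]: I1124 11:56:49.078804 4782 ".
	head := regexp.MustCompile(`^\w{3} +\d+ [\d:]+ \S+ \S+\[\d+\]: [IWEF]\d{4} [\d:.]+ +\d+ `)
	// RFC3339 stamps embedded in message bodies (lastHeartbeatTime etc.).
	ts := regexp.MustCompile(`\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // status patches are very long lines
	last, count := "", 0
	flush := func() {
		if count > 1 {
			fmt.Printf("  (repeated %d times)\n", count)
		}
	}
	for sc.Scan() {
		line := sc.Text()
		// Compare bodies with headers and embedded timestamps neutralized.
		body := ts.ReplaceAllString(head.ReplaceAllString(line, ""), "<ts>")
		if body == last {
			count++
			continue
		}
		flush()
		last, count = body, 1
		fmt.Println(line) // first occurrence is printed verbatim
	}
	flush()
}

Fed the kubelet journal for this window, a filter like this would reduce the heartbeat spam to one line per distinct message plus a repeat count, leaving the certificate and CNI errors easy to scan for.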
Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.366440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.366487 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.366498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.366514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.366525 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.468338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.468447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.468470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.468494 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.468510 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.490132 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:51 crc kubenswrapper[4782]: E1124 11:56:51.490334 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.490350 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.490443 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:51 crc kubenswrapper[4782]: E1124 11:56:51.490626 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:51 crc kubenswrapper[4782]: E1124 11:56:51.490724 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.509691 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.529476 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.545072 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.559267 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.571246 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.571284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.571295 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.571310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.571322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.574162 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.590953 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.604978 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.616976 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.631599 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.643684 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.656154 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.666320 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.673855 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.673930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.673947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.673969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.673985 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.683559 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa86
63b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.698972 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.709678 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.732174 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.744535 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:51Z is after 2025-08-24T17:21:41Z"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.776037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.776070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.776078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.776091 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.776100 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.880136 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.880500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.880519 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.880544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.880563 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.984077 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.984141 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.984151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.984165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:51 crc kubenswrapper[4782]: I1124 11:56:51.984176 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:51Z","lastTransitionTime":"2025-11-24T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.086317 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.086413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.086431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.086454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.086470 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.189329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.189384 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.189396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.189413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.189425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.291769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.291847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.291861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.291878 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.291890 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.394581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.394697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.394714 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.394731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.394743 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.490524 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:56:52 crc kubenswrapper[4782]: E1124 11:56:52.490708 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.498089 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.498135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.498144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.498159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.498168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.600669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.600720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.600736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.600760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.600774 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.703524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.703645 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.703709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.703739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.703761 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.806144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.806205 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.806222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.806244 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.806261 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.909422 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.909535 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.909557 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.909585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:52 crc kubenswrapper[4782]: I1124 11:56:52.909623 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:52Z","lastTransitionTime":"2025-11-24T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.012623 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.012689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.012711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.012739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.012760 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.115424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.115458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.115466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.115499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.115511 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.217876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.217911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.217923 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.217937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.217948 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.320565 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.320624 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.320645 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.320670 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.320686 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.423697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.423737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.424067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.424088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.424110 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.490566 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.490647 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:56:53 crc kubenswrapper[4782]: E1124 11:56:53.490694 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:56:53 crc kubenswrapper[4782]: E1124 11:56:53.490774 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.490844 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:56:53 crc kubenswrapper[4782]: E1124 11:56:53.490882 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.526495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.526545 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.526554 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.526566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.526575 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.629171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.629436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.629520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.629588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.629651 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.731629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.731666 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.731675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.731689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.731697 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.833398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.833431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.833442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.833458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.833469 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.935740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.935774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.935782 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.935795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:53 crc kubenswrapper[4782]: I1124 11:56:53.935802 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:53Z","lastTransitionTime":"2025-11-24T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.039178 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.039249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.039270 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.039298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.039321 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.142287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.142339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.142356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.142414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.142438 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.245315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.245386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.245401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.245415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.245426 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.348233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.348336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.348363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.348470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.348547 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.451079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.451111 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.451120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.451135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.451146 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.490765 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:56:54 crc kubenswrapper[4782]: E1124 11:56:54.491060 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.554227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.554289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.554314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.554343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.554365 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.702661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.702718 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.702737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.702762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.702786 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.805460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.805503 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.805515 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.805533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.805545 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.908441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.908472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.908499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.908512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:54 crc kubenswrapper[4782]: I1124 11:56:54.908520 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:54Z","lastTransitionTime":"2025-11-24T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.011408 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.011455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.011467 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.011484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.011497 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.114135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.114179 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.114191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.114207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.114219 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.216605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.216640 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.216648 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.216661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.216670 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.318832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.318872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.318881 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.318895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.318904 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.420976 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.421023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.421044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.421064 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.421076 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.490326 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.490326 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:56:55 crc kubenswrapper[4782]: E1124 11:56:55.490463 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:56:55 crc kubenswrapper[4782]: E1124 11:56:55.490574 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.490345 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:56:55 crc kubenswrapper[4782]: E1124 11:56:55.490663 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.522996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.523031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.523044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.523059 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.523070 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.625148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.625197 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.625213 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.625232 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.625244 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.727694 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.727734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.727743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.727975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.728015 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.830068 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.830125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.830141 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.830325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.830340 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.932666 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.932700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.932710 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.932726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:55 crc kubenswrapper[4782]: I1124 11:56:55.932736 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:55Z","lastTransitionTime":"2025-11-24T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.034801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.034840 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.034852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.034870 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.034880 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.136638 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.136688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.136700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.136713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.136723 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.238676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.238920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.238993 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.239066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.239139 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.341443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.341488 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.341500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.341516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.341528 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.443917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.443948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.443958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.443974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.443985 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.490462 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:56 crc kubenswrapper[4782]: E1124 11:56:56.490614 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.546330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.546364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.546392 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.546406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.546414 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.648588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.648629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.648639 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.648654 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.648665 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.750953 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.750987 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.750996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.751011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.751020 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.853479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.853515 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.853527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.853544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.853556 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.956335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.956366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.956395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.956408 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:56 crc kubenswrapper[4782]: I1124 11:56:56.956417 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:56Z","lastTransitionTime":"2025-11-24T11:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.058667 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.058713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.058724 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.058741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.058755 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.161347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.161431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.161449 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.161471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.161488 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.263589 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.263642 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.263654 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.263673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.263684 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.365819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.365899 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.365916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.365940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.365956 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.468259 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.468306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.468317 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.468361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.468401 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.491142 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.491232 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.491245 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:57 crc kubenswrapper[4782]: E1124 11:56:57.491301 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:57 crc kubenswrapper[4782]: E1124 11:56:57.491492 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:57 crc kubenswrapper[4782]: E1124 11:56:57.491621 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.570737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.570765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.570772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.570784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.570793 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.672866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.673128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.673202 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.673273 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.673332 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.776611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.776651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.776660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.776674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.776683 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.881014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.881076 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.881097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.881126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.881148 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.983583 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.983625 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.983637 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.983653 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:57 crc kubenswrapper[4782]: I1124 11:56:57.983664 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:57Z","lastTransitionTime":"2025-11-24T11:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.086183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.086226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.086237 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.086250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.086259 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.188995 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.189655 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.189764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.189877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.190009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.292233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.292275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.292284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.292298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.292306 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.394991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.395058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.395070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.395104 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.395118 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.490754 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:58 crc kubenswrapper[4782]: E1124 11:56:58.490932 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.497445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.497580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.497656 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.497745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.497827 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.599749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.599821 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.599834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.599850 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.599886 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.702084 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.702159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.702171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.702188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.702222 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.804105 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.804160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.804176 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.804201 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.804217 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.888529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.888568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.888577 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.888591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.888602 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: E1124 11:56:58.902452 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:58Z is after 
2025-08-24T17:21:41Z" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.906518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.906563 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.906592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.906611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.906625 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: E1124 11:56:58.920571 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:58Z is after 
2025-08-24T17:21:41Z" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.923834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.923875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.923888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.923907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.923922 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: E1124 11:56:58.936960 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:58Z is after 2025-08-24T17:21:41Z"
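Every one of these patch retries bottoms out in the same TLS failure: the network-node-identity webhook on 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months before the node's current time, so crypto/x509 validation rejects it. A minimal, self-contained Go sketch of that validity check; the certificate here is a throwaway self-signed stand-in generated on the fly, not the webhook's real serving cert:

```go
// Minimal reproduction of the "certificate has expired or is not yet valid"
// error seen in the log: crypto/x509 rejects any chain whose leaf lies
// outside its NotBefore/NotAfter window. The self-signed cert below is an
// illustrative stand-in.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity"}, // illustrative name
		NotBefore:             time.Now().Add(-48 * time.Hour),
		NotAfter:              time.Now().Add(-24 * time.Hour), // already expired
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}

	roots := x509.NewCertPool()
	roots.AddCert(cert)
	if _, err := cert.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
		// Prints: x509: certificate has expired or is not yet valid: current time ... is after ...
		fmt.Println(err)
	}
}
```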
Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.940154 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.940188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.940196 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.940208 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.940217 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: E1124 11:56:58.949948 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:58Z is after 2025-08-24T17:21:41Z"
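For reference, the err string in each retry embeds a strategic-merge patch against the node's .status; the full payload is quoted once further above (conditions, allocatable/capacity, the image list, nodeInfo, and runtimeHandlers). A sketch of how the conditions portion of such a patch is built and serialized, using a hand-rolled struct whose field names mirror k8s.io/api/core/v1.NodeCondition; the struct and the patch map are dependency-free stand-ins, not the kubelet's actual types:

```go
// Sketch of the conditions part of the node-status patch quoted in the log.
// NodeCondition here is a dependency-free stand-in mirroring the JSON field
// names, not the real k8s.io/api/core/v1 type.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	ready := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	patch := map[string]any{
		"status": map[string]any{
			// $setElementOrder/conditions pins list ordering for the
			// strategic merge patch, as seen at the top of the logged payload.
			"$setElementOrder/conditions": []map[string]string{{"type": "Ready"}},
			"conditions":                  []NodeCondition{ready},
		},
	}
	out, _ := json.Marshal(patch)
	fmt.Println(string(out))
}
```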
Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.952796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.952906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.952979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.953051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.953108 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:58 crc kubenswrapper[4782]: E1124 11:56:58.964539 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:56:58Z is after 2025-08-24T17:21:41Z"
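The kubelet only attempts this status patch a fixed number of times per sync loop before giving up, which is exactly what the "update node status exceeds retry count" record below reports after the third consecutive webhook rejection. An illustrative Go analogue of that bounded-retry pattern; the constant name and messages echo the log but are stand-ins, not kubelet source:

```go
// Illustrative analogue of the kubelet's bounded node-status update loop:
// a fixed number of attempts, then a terminal "exceeds retry count" error.
// nodeStatusUpdateRetry and the error texts are stand-ins for this sketch.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5

func tryUpdateNodeStatus(attempt int) error {
	// Stand-in for the PATCH that the expired-cert webhook rejects above.
	return errors.New("failed calling webhook: certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```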
Nov 24 11:56:58 crc kubenswrapper[4782]: E1124 11:56:58.964693 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.965945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.965974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.965985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.965998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:58 crc kubenswrapper[4782]: I1124 11:56:58.966009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:58Z","lastTransitionTime":"2025-11-24T11:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.030689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:56:59 crc kubenswrapper[4782]: E1124 11:56:59.030876 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:59 crc kubenswrapper[4782]: E1124 11:56:59.030962 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:57:31.030937983 +0000 UTC m=+100.274771772 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.068651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.068687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.068703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.068725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.068739 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.171409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.171448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.171460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.171474 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.171483 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
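The volume manager applies a give-up-and-back-off discipline to the failed metrics-certs mount: the "durationBeforeRetry 32s" recorded above is one step of an exponential backoff that doubles after each consecutive failure. A small sketch of that doubling-with-cap pattern; the initial duration, factor, and cap are illustrative assumptions, not the values hard-coded in kubelet's nestedpendingoperations:

```go
// Sketch of exponential backoff as applied to failed volume operations:
// each consecutive failure doubles the wait before the next retry, up to a
// cap. Initial duration and cap below are illustrative values.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max time.Duration
	current      time.Duration
}

// next returns the wait before the next retry, doubling the stored delay
// until it reaches the cap.
func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else if b.current < b.max {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 1; i <= 8; i++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", i, b.next())
	}
	// With these assumed values, failure 7 waits 32s, matching the
	// durationBeforeRetry seen in the log record above.
}
```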
Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.273406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.273440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.273448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.273463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.273487 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.375691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.375905 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.375975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.376038 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.376106 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.478697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.478918 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.479021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.479102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.479176 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.490119 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:56:59 crc kubenswrapper[4782]: E1124 11:56:59.490194 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.490303 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:56:59 crc kubenswrapper[4782]: E1124 11:56:59.490450 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.490119 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:56:59 crc kubenswrapper[4782]: E1124 11:56:59.490865 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.491054 4782 scope.go:117] "RemoveContainer" containerID="ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99" Nov 24 11:56:59 crc kubenswrapper[4782]: E1124 11:56:59.491183 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.582433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.582473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.582484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.582498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.582510 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.685237 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.685286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.685296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.685309 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.685319 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.787433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.787468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.787477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.787492 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.787503 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.890784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.890848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.890864 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.890887 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.890905 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.993527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.993570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.993581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.993597 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:56:59 crc kubenswrapper[4782]: I1124 11:56:59.993612 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:56:59Z","lastTransitionTime":"2025-11-24T11:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.095604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.095674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.095698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.095727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.095750 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.198524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.198557 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.198567 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.198581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.198592 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.300580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.300633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.300642 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.300657 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.300670 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.403084 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.403124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.403134 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.403148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.403158 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.490262 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:00 crc kubenswrapper[4782]: E1124 11:57:00.490409 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.505234 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.505268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.505278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.505294 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.505304 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.607730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.607768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.607776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.607792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.607802 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.710056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.710106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.710120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.710139 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.710151 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.812429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.812484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.812496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.812513 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.812525 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.911035 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/0.log" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.911078 4782 generic.go:334] "Generic (PLEG): container finished" podID="56de1ffb-9734-4992-b477-591dfae5ad41" containerID="06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f" exitCode=1 Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.911105 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerDied","Data":"06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.911454 4782 scope.go:117] "RemoveContainer" containerID="06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.915007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.915072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.915086 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.915101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.915111 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:00Z","lastTransitionTime":"2025-11-24T11:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.924287 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.934319 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.947599 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.959495 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:00Z is after 2025-08-24T17:21:41Z" Nov 24 
11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.976675 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:00 crc kubenswrapper[4782]: I1124 11:57:00.988573 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.001833 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.012763 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.016937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.016955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.016962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.016975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.016983 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.031331 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.048742 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa86
63b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.059960 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.072700 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.082137 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.095966 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"2025-11-24T11:56:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4\\\\n2025-11-24T11:56:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4 to /host/opt/cni/bin/\\\\n2025-11-24T11:56:15Z [verbose] multus-daemon started\\\\n2025-11-24T11:56:15Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:57:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.107489 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 
11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.119156 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.119189 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.119234 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.119251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.119271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.119286 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.130630 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.221654 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.221687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.221695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.221708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.221725 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.323758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.323794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.323803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.323815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.323826 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.426645 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.426691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.426708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.426731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.426747 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.492601 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:01 crc kubenswrapper[4782]: E1124 11:57:01.492740 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.493158 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:01 crc kubenswrapper[4782]: E1124 11:57:01.493221 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.493434 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:01 crc kubenswrapper[4782]: E1124 11:57:01.493500 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.512904 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.529793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.529859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.529876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.529898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.529916 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.531481 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.546965 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.570556 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.581149 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.591479 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.601387 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.611284 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"2025-11-24T11:56:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4\\\\n2025-11-24T11:56:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4 to /host/opt/cni/bin/\\\\n2025-11-24T11:56:15Z [verbose] multus-daemon started\\\\n2025-11-24T11:56:15Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:57:00Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.619787 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.629718 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.632180 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.632208 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.632216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.632229 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.632237 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.640917 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.652128 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.663077 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.677435 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5
ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.688572 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9
87117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.700508 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.711152 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.734621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.734735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.734799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.734862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.734926 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.837234 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.837271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.837280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.837292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.837305 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.915354 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/0.log" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.915423 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerStarted","Data":"43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.943671 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.952146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.952184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.952194 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.952209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.952219 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:01Z","lastTransitionTime":"2025-11-24T11:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.973981 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:01 crc kubenswrapper[4782]: I1124 11:57:01.995363 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"2025-11-24T11:56:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4\\\\n2025-11-24T11:56:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4 to /host/opt/cni/bin/\\\\n2025-11-24T11:56:15Z [verbose] multus-daemon started\\\\n2025-11-24T11:56:15Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:57:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:01Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.007039 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 
11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.019402 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.030577 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.043994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.052940 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.054241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.054266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.054274 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.054287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.054297 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.063910 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.075027 4782 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1
c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.084820 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.093579 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e96822541
0d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.105695 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.114292 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.123155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.132812 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.148339 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.157026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.157062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.157072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.157090 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.157101 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.259360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.259407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.259415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.259427 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.259436 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.361991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.362036 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.362048 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.362064 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.362078 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.463735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.463767 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.463779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.463793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.463805 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.490310 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:02 crc kubenswrapper[4782]: E1124 11:57:02.490462 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.565710 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.565788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.565802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.565817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.565829 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.667520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.667569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.667586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.667606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.667622 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.769103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.769132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.769143 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.769157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.769168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.871330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.871387 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.871400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.871429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.871441 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.973493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.973534 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.973544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.973558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:02 crc kubenswrapper[4782]: I1124 11:57:02.973568 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:02Z","lastTransitionTime":"2025-11-24T11:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.076174 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.076212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.076222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.076238 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.076251 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.178288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.178332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.178344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.178360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.178391 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.280931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.280976 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.280984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.280998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.281007 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.387212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.387247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.387255 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.387267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.387276 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.488925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.488964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.488974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.488988 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.488998 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.490647 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.490675 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.490701 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:03 crc kubenswrapper[4782]: E1124 11:57:03.490748 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:03 crc kubenswrapper[4782]: E1124 11:57:03.490825 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:03 crc kubenswrapper[4782]: E1124 11:57:03.490875 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.591316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.591393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.591406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.591423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.591433 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.693845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.693891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.693901 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.693915 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.693927 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.795608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.795650 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.795660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.795676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.795687 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.897835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.897874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.897882 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.897895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:03 crc kubenswrapper[4782]: I1124 11:57:03.897904 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:03Z","lastTransitionTime":"2025-11-24T11:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.000630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.000671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.000680 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.000694 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.000703 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.105460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.105512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.105525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.105546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.105562 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.209036 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.209099 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.209117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.209140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.209158 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.311596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.311637 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.311646 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.311662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.311673 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.413512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.413549 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.413558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.413573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.413583 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.490394 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:04 crc kubenswrapper[4782]: E1124 11:57:04.490540 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.515290 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.515325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.515335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.515348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.515358 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.617773 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.617819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.617829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.617846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.617856 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.720361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.720411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.720420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.720436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.720445 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.823058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.823091 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.823100 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.823114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.823123 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.924541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.924578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.924587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.924601 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:04 crc kubenswrapper[4782]: I1124 11:57:04.924610 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:04Z","lastTransitionTime":"2025-11-24T11:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.026938 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.026974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.026985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.027001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.027013 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.129338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.129411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.129445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.129470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.129481 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.231913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.231952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.231994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.232011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.232025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.334434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.334480 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.334492 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.334510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.334522 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.437112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.437155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.437165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.437183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.437195 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.490940 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.490980 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:05 crc kubenswrapper[4782]: E1124 11:57:05.491125 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.491142 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:05 crc kubenswrapper[4782]: E1124 11:57:05.491227 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:05 crc kubenswrapper[4782]: E1124 11:57:05.491306 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.540645 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.540711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.540723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.540737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.540748 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.643389 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.643440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.643450 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.643466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.643476 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.746800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.746855 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.746871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.746893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.746910 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.850226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.850264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.850274 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.850289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.850301 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.953092 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.953148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.953164 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.953185 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:05 crc kubenswrapper[4782]: I1124 11:57:05.953202 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:05Z","lastTransitionTime":"2025-11-24T11:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.055938 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.055984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.055997 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.056013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.056025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.158750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.158815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.158834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.158860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.158883 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.261516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.261585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.261601 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.261624 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.261640 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.363823 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.363862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.363878 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.363897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.363910 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.466513 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.466558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.466570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.466586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.466601 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.489782 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:06 crc kubenswrapper[4782]: E1124 11:57:06.489902 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.568731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.568888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.568910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.568934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.568953 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.671678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.671729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.671740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.671762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.671772 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.774726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.774814 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.774831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.774852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.774866 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.877647 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.877728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.877764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.877792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.877818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.981107 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.981200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.981223 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.981244 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:06 crc kubenswrapper[4782]: I1124 11:57:06.981263 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:06Z","lastTransitionTime":"2025-11-24T11:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.084055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.084089 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.084100 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.084117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.084128 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.186541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.186578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.186606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.186621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.186630 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.289411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.289465 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.289478 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.289499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.289511 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.392355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.392405 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.392418 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.392434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.392448 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.490594 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.490630 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.490696 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:07 crc kubenswrapper[4782]: E1124 11:57:07.490842 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:07 crc kubenswrapper[4782]: E1124 11:57:07.490968 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:07 crc kubenswrapper[4782]: E1124 11:57:07.491110 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.494272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.494306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.494317 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.494334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.494346 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.596028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.596079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.596089 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.596116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.596126 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.698018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.698061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.698077 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.698095 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.698106 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.801735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.801800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.801817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.801841 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.801859 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.904324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.904367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.904422 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.904437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:07 crc kubenswrapper[4782]: I1124 11:57:07.904448 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:07Z","lastTransitionTime":"2025-11-24T11:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.007184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.007224 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.007239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.007260 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.007276 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.110577 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.110628 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.110637 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.110651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.110660 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.214909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.214971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.214980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.214998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.215008 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.317798 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.317859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.317877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.317903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.317927 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.420467 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.420528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.420545 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.420570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.420588 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.490486 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:08 crc kubenswrapper[4782]: E1124 11:57:08.490646 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.523281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.523314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.523326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.523341 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.523353 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.625903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.625996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.626006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.626019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.626028 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.728304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.728361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.728411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.728435 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.728452 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.830112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.830171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.830192 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.830214 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.830232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.932348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.932412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.932423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.932439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:08 crc kubenswrapper[4782]: I1124 11:57:08.932452 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:08Z","lastTransitionTime":"2025-11-24T11:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.035167 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.035228 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.035249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.035276 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.035297 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.138264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.138331 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.138353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.138432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.138455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.142725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.142788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.142805 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.142827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.142845 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.160854 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.165018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.165071 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
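The status patch itself is well-formed; it is rejected because the node.network-node-identity.openshift.io webhook presents a serving certificate that expired on 2025-08-24, months before the current time 2025-11-24. One way to confirm such an expiry from the node, sketched in Go (the endpoint 127.0.0.1:9743 is taken from the error above; InsecureSkipVerify is used only so the handshake succeeds and the presented certificate can be inspected):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint from the kubelet error above.
	const addr = "127.0.0.1:9743"

	// Skip verification so we can read an expired certificate.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// Mirrors the x509 error in the log: current time is after NotAfter.
		fmt.Println("certificate has expired")
	}
}
```

Because the webhook rejection is independent of the payload, the kubelet's retries below fail identically until the certificate is rotated; the patch content (conditions, capacity, image list) never changes between attempts.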
event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.165088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.165108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.165125 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.182679 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.187307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.187410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.187431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.187456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.187474 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.205992 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.210242 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.210311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.210332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.210357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.210427 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.229227 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.233173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.233242 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.233268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.233296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.233321 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.252943 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b441a7b0-c6b8-4deb-981a-b7ea6afe0bee\\\",\\\"systemUUID\\\":\\\"71fe61e6-e0a5-43ad-b8ba-cb806e0524a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.253067 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.255311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.255336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.255345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.255362 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.255386 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.357703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.357731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.357739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.357750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.357759 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.459973 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.460022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.460039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.460060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.460075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.490814 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.490893 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.490964 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.491170 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.491294 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:09 crc kubenswrapper[4782]: E1124 11:57:09.491362 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.561971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.562005 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.562014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.562027 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.562035 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.664930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.664965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.664975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.665001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.665012 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.767125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.767163 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.767172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.767186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.767195 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.870246 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.870304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.870320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.870342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.870360 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.973395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.973437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.973446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.973546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:09 crc kubenswrapper[4782]: I1124 11:57:09.973565 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:09Z","lastTransitionTime":"2025-11-24T11:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.076657 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.076710 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.076723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.076747 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.076759 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.178728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.178770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.178782 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.178797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.178807 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.280790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.280831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.280842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.280857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.280869 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.383262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.383323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.383462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.383489 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.383884 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.488088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.488124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.488135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.488151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.488163 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.489924 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:10 crc kubenswrapper[4782]: E1124 11:57:10.490047 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.591437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.591464 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.591472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.591484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.591493 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.693971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.694013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.694026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.694043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.694056 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.796599 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.796662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.796676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.796693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.796705 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.899430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.899724 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.899839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.899955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:10 crc kubenswrapper[4782]: I1124 11:57:10.900064 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:10Z","lastTransitionTime":"2025-11-24T11:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.002393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.002447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.002480 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.002498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.002508 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.105060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.105095 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.105104 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.105120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.105130 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.206639 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.206672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.206680 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.206695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.206706 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.309002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.309162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.309177 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.309197 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.309212 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.411881 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.411951 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.411974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.412003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.412028 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.490091 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.490199 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.490216 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:11 crc kubenswrapper[4782]: E1124 11:57:11.490316 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:11 crc kubenswrapper[4782]: E1124 11:57:11.490441 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:11 crc kubenswrapper[4782]: E1124 11:57:11.490542 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.491899 4782 scope.go:117] "RemoveContainer" containerID="ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.514140 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.514270 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.514292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.514299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.514311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.514320 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.536139 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"2025-11-24T11:56:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4\\\\n2025-11-24T11:56:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4 to /host/opt/cni/bin/\\\\n2025-11-24T11:56:15Z [verbose] multus-daemon started\\\\n2025-11-24T11:56:15Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:57:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.545539 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 
11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.561641 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.576194 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.588300 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.598348 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.615106 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.615828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.615875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.615894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.615912 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.615925 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.628903 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.638848 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.656691 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.672160 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.686962 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.698243 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.715411 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ov
nkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.717607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.717631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.717640 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.717654 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.717663 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.727511 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.740690 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.819852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.819883 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.819893 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.819907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.819917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.921782 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.921815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.921846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.921862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.921894 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:11Z","lastTransitionTime":"2025-11-24T11:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.946768 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/2.log" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.949210 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e"} Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.950245 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.963682 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.976251 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:11 crc kubenswrapper[4782]: I1124 11:57:11.986510 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.003222 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.017708 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.023721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.023761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.023771 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.023788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.023798 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.028675 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.039270 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"2025-11-24T11:56:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4\\\\n2025-11-24T11:56:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4 to /host/opt/cni/bin/\\\\n2025-11-24T11:56:15Z [verbose] multus-daemon started\\\\n2025-11-24T11:56:15Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:57:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.052408 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 
11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.061512 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.073091 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.083396 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.095526 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.103744 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.121814 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.125562 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.125594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.125604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.125618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.125629 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.136899 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.150203 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.168209 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.227849 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.227887 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.227897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.227913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.227926 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.330467 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.330504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.330513 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.330527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.330538 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.432959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.433016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.433032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.433055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.433075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.490702 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:12 crc kubenswrapper[4782]: E1124 11:57:12.490830 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.536418 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.536468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.536481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.536499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.536512 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.638144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.638185 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.638195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.638211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.638223 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.740844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.740890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.740902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.740920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.740933 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.843219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.843263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.843274 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.843289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.843300 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.946150 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.946199 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.946210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.946225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.946236 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:12Z","lastTransitionTime":"2025-11-24T11:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.953429 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/3.log" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.954048 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/2.log" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.958477 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" exitCode=1 Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.958643 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e"} Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.958807 4782 scope.go:117] "RemoveContainer" containerID="ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.963203 4782 scope.go:117] "RemoveContainer" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" Nov 24 11:57:12 crc kubenswrapper[4782]: E1124 11:57:12.963971 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:57:12 crc kubenswrapper[4782]: I1124 11:57:12.980228 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.005581 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.022629 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.037014 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.046768 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.049077 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.049129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.049147 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.049170 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.049187 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.069294 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae95f68f46b691271cc54fbd4b7d99451e74f2f1
9b63d5adedb70cb076909e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff21317719c429d008bef6362f5a0cc591b5aa8663b3ea2f3dc2929416763d99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:56:47Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:56:47.267901 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:56:47.267929 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:56:47.267940 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:56:47.267950 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:56:47.267956 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:56:47.267962 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:56:47.267967 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:56:47.267973 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:56:47.268189 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:56:47.268259 6400 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:56:47.268475 6400 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:56:47.268746 6400 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:12Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:57:12.262912 6733 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.081630 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.092949 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.110392 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.127301 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"2025-11-24T11:56:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4\\\\n2025-11-24T11:56:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4 to /host/opt/cni/bin/\\\\n2025-11-24T11:56:15Z [verbose] multus-daemon started\\\\n2025-11-24T11:56:15Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:57:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.140646 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 
11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.150885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.151030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.151096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.151192 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.151257 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.154560 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 
11:57:13.171050 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.182447 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.196698 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.211002 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.222922 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 
11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.253909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.253996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.254008 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.254040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.254050 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.356236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.356281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.356291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.356313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.356324 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.459255 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.459322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.459336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.459359 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.459411 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.490209 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:13 crc kubenswrapper[4782]: E1124 11:57:13.490357 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.490611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:13 crc kubenswrapper[4782]: E1124 11:57:13.490709 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.490871 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:13 crc kubenswrapper[4782]: E1124 11:57:13.490937 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.561183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.561464 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.561554 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.561679 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.561799 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.664070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.664707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.664933 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.665040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.665133 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.767851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.767917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.767937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.767962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.767979 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.870959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.871019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.871037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.871059 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.871074 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.964208 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/3.log" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.967696 4782 scope.go:117] "RemoveContainer" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" Nov 24 11:57:13 crc kubenswrapper[4782]: E1124 11:57:13.967874 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.977962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.977995 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.978006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.978022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.978032 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:13Z","lastTransitionTime":"2025-11-24T11:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.984875 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3590b880b2a8f104ca53516351936616433f5f46274a30cb60684e367851e21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbdbdb245fb1149b93a95f9affd403dbd45aaca0f0274c4988fe5d955bb9aed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:13 crc kubenswrapper[4782]: I1124 11:57:13.993503 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xrshv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2d328d-ab01-42e7-9583-865d1b5516d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://983cd741be82065707fee3450763c2f75d634d70f337ca2f00e5b095b7c0448a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xrshv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.021953 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1de863b0-02f8-435c-9669-4ea856b352d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:12Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:12Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:57:12.262912 6733 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:57:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zzzxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.035964 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.048118 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c624d59ab6f11868a0f91617a148e88b3c9c4829cb393200c2ab4961d0111c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.059435 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.071887 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fp44f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56de1ffb-9734-4992-b477-591dfae5ad41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:57:00Z\\\",\\\"message\\\":\\\"2025-11-24T11:56:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4\\\\n2025-11-24T11:56:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8a40a1e6-6343-4780-9479-adc609607ad4 to /host/opt/cni/bin/\\\\n2025-11-24T11:56:15Z [verbose] multus-daemon started\\\\n2025-11-24T11:56:15Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:57:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fp44f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.080527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.080571 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.080598 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.080618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.080632 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.083747 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a58354-29ed-4cba-8422-4a433c8ec2ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fdb429deb4fa924707e0d9870c78279e90a98a0bcd4bf80b2e527c9b07c3bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bd771e96608feaae68114e00c721f20cd35136e4c9bcd60b80ef312f59ba362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n6ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-
11-24T11:56:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w5bnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.100022 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd3223b02ec4c8c519eb10a49da31f1efccf442911f56328036106aa199e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.113475 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.125976 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfk7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40dc18-6890-40a2-be2c-f40d806dc39b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1594c9c7ee88b3660459495f80609a9036e71ae46bda965dace9a25e9b8c64f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfk7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.149752 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvr97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e8feb84-86f6-4afe-9563-42016a7cd6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvr97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.164246 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d0eb9f-811d-4264-b8d4-6cff0c475fb2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13350c7e9bbd59f23a860ef8a31e3bdecd595a8268ea2ba62f25570e502448c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dedee4f354333e5c289b897308723e258a3541611463ae8e9079cf4044221e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a399598d7b94ba298c2f72782621d1e9635da2ac0f49198f961d93a6f526c33a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af2dae9c4afdfca5e9571779479b821eab135bd69e9f09d00cc141cd75a1845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.179289 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0413d1f7-6382-4cbd-9156-a7f45724c0ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://649746cfc763723cf9d6026b0226340087be31ce60536e3edb1c093764b2a39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92cb538819a9339197f9b01f028f7223b70aae6629bd0b8ee2d58bb0092074cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b35be455c30fc65977ddbe89a304558255034c12737052b72479a8216eb07c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4609700b04411501266e53e15b8522d50d73107bf7ed6dd8ef6e6fdf568c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.182952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.183001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.183011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.183024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 
11:57:14.183033 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.193560 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078c4346-9841-4870-a8b8-de6911b24498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d7fff45f9b3df23bf6df57e7edcfca00f3679b02596d366a1fb0148c5f7abe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fm8qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xg6cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.219770 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-76dcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f32dbad-0f9c-401b-89f2-a5069455e025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5add21bab387562c4194a46df0b0f37ed2d73e9c2b445da908ff800634d0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e73cecf45a06b75e1c9b0ef119b9c759927243dbc4ceeee4f7081628cc5707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5887e1aabfc0facd1e9ff56b6ab52d85773199a4d579356811fe38ae49d54bec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d3091cca48b125a2a43288893e9a5951b90946f71efaf17720ce7f52ae22738\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eea
de0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c314ab3944e435dbf3b705ef779b31e4ea5771a98a783413b253a9cda74c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9507b7ec11295688c17d75108665dc94dbd1b9cfec74ad045177f1dffe903a5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed2df074319b5758afc681a4bc99a940cbcb7d44661e0329b07db09466c1243f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:56:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mk4ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:56
:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-76dcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.239672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b80b7a91-ed26-4b31-8c6b-08cc4a550281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ab722de42d2618a86f216e7b3ff37489fca31cd1893e65239144245c3a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b8f259739674153b9760deee1a19d1bba845f83d7bb473dd27e5b239e418b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cce1fa783bd41ffef02bb03995cbe4e612fdc5349c5a2e7f8111998270f4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7774a83d4468f20fa234a4d692a995291a73bb016e4eb9eb1c5ed5572c7adeb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb022bbf5ba39e08fcd7d9c93a56b9a71e4afbd00f0a34648a584f0d89a51ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:56:11Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:56:04.953034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:56:04.956305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1258293389/tls.crt::/tmp/serving-cert-1258293389/tls.key\\\\\\\"\\\\nI1124 11:56:11.292796 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:56:11.303472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:56:11.303510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:56:11.303543 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:56:11.303554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:56:11.316748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:56:11.316809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:56:11.316836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:56:11.316845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1124 11:56:11.316836 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:56:11.316854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:56:11.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:56:11.318339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://517a31764ee355b3e0dcebf30ee6ddf6aa9d013666c64f8c0501dade7f10092e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:55:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbea9206a202c40a93f3f3609013057261d94ca9d6550bacbf328fa60889b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:55:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:55:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:57:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.286465 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.286540 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.286562 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.286591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.286707 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.388845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.388880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.388892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.388906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.388917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.489994 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:14 crc kubenswrapper[4782]: E1124 11:57:14.490214 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.490984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.491006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.491014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.491024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.491034 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.592878 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.592926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.592937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.592954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.592965 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.695244 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.695283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.695293 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.695307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.695318 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.798152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.798198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.798229 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.798245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.798257 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.899926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.899969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.899978 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.899992 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:14 crc kubenswrapper[4782]: I1124 11:57:14.900000 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:14Z","lastTransitionTime":"2025-11-24T11:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.002506 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.002548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.002556 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.002570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.002581 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.104606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.104644 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.104652 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.104665 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.104674 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.207211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.207246 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.207254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.207266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.207275 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.309852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.309914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.309931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.309955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.309972 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.397424 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.397590 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.397603 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 11:58:19.3975821 +0000 UTC m=+148.641415869 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.397670 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.397732 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.397706554 +0000 UTC m=+148.641540323 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.411783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.411817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.411829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.411869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.411882 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.490681 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.490742 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.490841 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.490717 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.490976 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.491066 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.498195 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.498236 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.498275 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498362 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498401 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498409 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498447 4782 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.498434056 +0000 UTC m=+148.742267825 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498362 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498465 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498472 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498490 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.498483898 +0000 UTC m=+148.742317667 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498363 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: E1124 11:57:15.498516 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.498510589 +0000 UTC m=+148.742344358 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.513564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.513590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.513599 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.513628 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.513636 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.615914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.615962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.615986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.616011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.616026 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.718283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.718323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.718332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.718346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.718356 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.820490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.820538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.820549 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.820565 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.820576 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.922346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.922398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.922406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.922418 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:15 crc kubenswrapper[4782]: I1124 11:57:15.922428 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:15Z","lastTransitionTime":"2025-11-24T11:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.025006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.025073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.025095 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.025124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.025143 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.127436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.127483 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.127492 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.127508 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.127519 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.230141 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.230177 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.230186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.230198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.230210 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.332940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.332978 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.332989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.333004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.333016 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.435885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.435928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.435937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.435951 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.435961 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.490770 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:16 crc kubenswrapper[4782]: E1124 11:57:16.491038 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.538798 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.538852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.538866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.538887 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.538904 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.640605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.640639 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.640647 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.640659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.640668 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.742616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.742658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.742669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.742684 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.742694 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.845576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.845623 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.845635 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.845652 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.845664 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.948312 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.948352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.948398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.948415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:16 crc kubenswrapper[4782]: I1124 11:57:16.948427 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:16Z","lastTransitionTime":"2025-11-24T11:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.050790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.050826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.050837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.050850 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.050860 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.152769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.152810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.152842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.152857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.152867 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.254857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.254889 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.254897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.254910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.254918 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.358140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.358173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.358184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.358197 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.358208 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.462058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.462123 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.462135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.462158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.462179 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.491610 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.491654 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.491684 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:17 crc kubenswrapper[4782]: E1124 11:57:17.491832 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:17 crc kubenswrapper[4782]: E1124 11:57:17.491964 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:17 crc kubenswrapper[4782]: E1124 11:57:17.492067 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.564159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.564195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.564204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.564217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.564226 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.667646 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.667691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.667701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.667720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.667730 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.770445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.770486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.770496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.770511 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.770522 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.873122 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.873660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.873673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.873731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.873750 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.976778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.976812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.976825 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.976842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:17 crc kubenswrapper[4782]: I1124 11:57:17.976854 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:17Z","lastTransitionTime":"2025-11-24T11:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.079768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.079806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.079817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.079832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.079843 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.182434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.182478 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.182496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.182514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.182530 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.284510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.284539 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.284552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.284566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.284574 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.386728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.386777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.386794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.386818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.386839 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.488906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.488949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.488959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.488974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.488987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.489812 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:18 crc kubenswrapper[4782]: E1124 11:57:18.489976 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.591272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.591324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.591335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.591352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.591362 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.695310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.695357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.695368 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.695393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.695406 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.798146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.798183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.798192 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.798207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.798219 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.900890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.900935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.900949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.900970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:18 crc kubenswrapper[4782]: I1124 11:57:18.900981 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:18Z","lastTransitionTime":"2025-11-24T11:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.003577 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.003620 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.003633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.003652 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.003665 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:19Z","lastTransitionTime":"2025-11-24T11:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.105393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.105443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.105455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.105469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.105480 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:19Z","lastTransitionTime":"2025-11-24T11:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.207601 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.207635 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.207646 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.207661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.207671 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:19Z","lastTransitionTime":"2025-11-24T11:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.309966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.310004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.310017 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.310041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.310054 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:19Z","lastTransitionTime":"2025-11-24T11:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.412962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.413009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.413019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.413033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.413043 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:19Z","lastTransitionTime":"2025-11-24T11:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.490225 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.490307 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:19 crc kubenswrapper[4782]: E1124 11:57:19.490398 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.490227 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:19 crc kubenswrapper[4782]: E1124 11:57:19.490624 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:19 crc kubenswrapper[4782]: E1124 11:57:19.490724 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.514886 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.514925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.514955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.514969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.514977 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:19Z","lastTransitionTime":"2025-11-24T11:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.583976 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.584022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.584033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.584050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.584068 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:57:19Z","lastTransitionTime":"2025-11-24T11:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.634037 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82"] Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.634560 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.637866 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.638206 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.638429 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.638649 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.658645 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.658626384 podStartE2EDuration="38.658626384s" podCreationTimestamp="2025-11-24 11:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.658503831 +0000 UTC m=+88.902337630" watchObservedRunningTime="2025-11-24 11:57:19.658626384 +0000 UTC m=+88.902460153" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.697723 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xfk7b" podStartSLOduration=67.697705763 podStartE2EDuration="1m7.697705763s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.697214608 +0000 UTC m=+88.941048377" watchObservedRunningTime="2025-11-24 11:57:19.697705763 +0000 UTC m=+88.941539552" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.725045 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.725029224 podStartE2EDuration="1m8.725029224s" podCreationTimestamp="2025-11-24 11:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.724014573 +0000 UTC m=+88.967848362" watchObservedRunningTime="2025-11-24 11:57:19.725029224 +0000 UTC m=+88.968862993" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.738745 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.73872927 podStartE2EDuration="1m8.73872927s" podCreationTimestamp="2025-11-24 11:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.738332128 +0000 UTC m=+88.982165917" watchObservedRunningTime="2025-11-24 11:57:19.73872927 +0000 UTC m=+88.982563059" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.747159 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/183d9a17-870e-47f0-a0f1-f9f87251da73-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") 
" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.747518 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/183d9a17-870e-47f0-a0f1-f9f87251da73-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.747624 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/183d9a17-870e-47f0-a0f1-f9f87251da73-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.747732 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183d9a17-870e-47f0-a0f1-f9f87251da73-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.747818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/183d9a17-870e-47f0-a0f1-f9f87251da73-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.749902 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podStartSLOduration=67.74989249 podStartE2EDuration="1m7.74989249s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.74923341 +0000 UTC m=+88.993067189" watchObservedRunningTime="2025-11-24 11:57:19.74989249 +0000 UTC m=+88.993726259" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.778917 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-76dcq" podStartSLOduration=67.778900442 podStartE2EDuration="1m7.778900442s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.76635618 +0000 UTC m=+89.010189959" watchObservedRunningTime="2025-11-24 11:57:19.778900442 +0000 UTC m=+89.022734211" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.804997 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xrshv" podStartSLOduration=67.804980425 podStartE2EDuration="1m7.804980425s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.804727107 +0000 UTC m=+89.048560896" watchObservedRunningTime="2025-11-24 
11:57:19.804980425 +0000 UTC m=+89.048814194" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.848893 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/183d9a17-870e-47f0-a0f1-f9f87251da73-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.849099 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/183d9a17-870e-47f0-a0f1-f9f87251da73-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.849103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/183d9a17-870e-47f0-a0f1-f9f87251da73-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.849176 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/183d9a17-870e-47f0-a0f1-f9f87251da73-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.849201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/183d9a17-870e-47f0-a0f1-f9f87251da73-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.849241 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183d9a17-870e-47f0-a0f1-f9f87251da73-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.849252 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/183d9a17-870e-47f0-a0f1-f9f87251da73-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.850420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/183d9a17-870e-47f0-a0f1-f9f87251da73-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.854832 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183d9a17-870e-47f0-a0f1-f9f87251da73-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.877887 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/183d9a17-870e-47f0-a0f1-f9f87251da73-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kfm82\" (UID: \"183d9a17-870e-47f0-a0f1-f9f87251da73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.910180 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fp44f" podStartSLOduration=67.910163733 podStartE2EDuration="1m7.910163733s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.899859 +0000 UTC m=+89.143692769" watchObservedRunningTime="2025-11-24 11:57:19.910163733 +0000 UTC m=+89.153997502" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.910490 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w5bnh" podStartSLOduration=66.910486863 podStartE2EDuration="1m6.910486863s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:19.909830603 +0000 UTC m=+89.153664372" watchObservedRunningTime="2025-11-24 11:57:19.910486863 +0000 UTC m=+89.154320632" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.951121 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" Nov 24 11:57:19 crc kubenswrapper[4782]: I1124 11:57:19.990418 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" event={"ID":"183d9a17-870e-47f0-a0f1-f9f87251da73","Type":"ContainerStarted","Data":"62b1cecb526d41674d401b39daa1b69eb56dd191d5b8bac51562e874d98d9132"} Nov 24 11:57:20 crc kubenswrapper[4782]: I1124 11:57:20.490485 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:20 crc kubenswrapper[4782]: E1124 11:57:20.490640 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:20 crc kubenswrapper[4782]: I1124 11:57:20.995226 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" event={"ID":"183d9a17-870e-47f0-a0f1-f9f87251da73","Type":"ContainerStarted","Data":"23a78cc16ac5027cb4a724347e06009a933e3ed1802e3bb482463698d83d5207"} Nov 24 11:57:21 crc kubenswrapper[4782]: I1124 11:57:21.010358 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kfm82" podStartSLOduration=69.010336196 podStartE2EDuration="1m9.010336196s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:21.009439079 +0000 UTC m=+90.253272898" watchObservedRunningTime="2025-11-24 11:57:21.010336196 +0000 UTC m=+90.254169985" Nov 24 11:57:21 crc kubenswrapper[4782]: I1124 11:57:21.489915 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:21 crc kubenswrapper[4782]: I1124 11:57:21.489940 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:21 crc kubenswrapper[4782]: I1124 11:57:21.489967 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:21 crc kubenswrapper[4782]: E1124 11:57:21.491017 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:21 crc kubenswrapper[4782]: E1124 11:57:21.491091 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:21 crc kubenswrapper[4782]: E1124 11:57:21.491160 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:22 crc kubenswrapper[4782]: I1124 11:57:22.490056 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:22 crc kubenswrapper[4782]: E1124 11:57:22.490462 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:23 crc kubenswrapper[4782]: I1124 11:57:23.490329 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:23 crc kubenswrapper[4782]: I1124 11:57:23.490528 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:23 crc kubenswrapper[4782]: E1124 11:57:23.490619 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:23 crc kubenswrapper[4782]: I1124 11:57:23.490747 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:23 crc kubenswrapper[4782]: E1124 11:57:23.490926 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:23 crc kubenswrapper[4782]: E1124 11:57:23.491019 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:24 crc kubenswrapper[4782]: I1124 11:57:24.490573 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:24 crc kubenswrapper[4782]: E1124 11:57:24.490739 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:25 crc kubenswrapper[4782]: I1124 11:57:25.490684 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:25 crc kubenswrapper[4782]: E1124 11:57:25.490803 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:25 crc kubenswrapper[4782]: I1124 11:57:25.491492 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:25 crc kubenswrapper[4782]: I1124 11:57:25.491646 4782 scope.go:117] "RemoveContainer" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" Nov 24 11:57:25 crc kubenswrapper[4782]: I1124 11:57:25.491672 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:25 crc kubenswrapper[4782]: E1124 11:57:25.491860 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:57:25 crc kubenswrapper[4782]: E1124 11:57:25.492022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:25 crc kubenswrapper[4782]: E1124 11:57:25.492114 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:25 crc kubenswrapper[4782]: I1124 11:57:25.502626 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 11:57:26 crc kubenswrapper[4782]: I1124 11:57:26.490074 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:26 crc kubenswrapper[4782]: E1124 11:57:26.490568 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:27 crc kubenswrapper[4782]: I1124 11:57:27.490274 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:27 crc kubenswrapper[4782]: I1124 11:57:27.490315 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:27 crc kubenswrapper[4782]: E1124 11:57:27.490418 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:27 crc kubenswrapper[4782]: I1124 11:57:27.490436 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:27 crc kubenswrapper[4782]: E1124 11:57:27.490535 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:27 crc kubenswrapper[4782]: E1124 11:57:27.490628 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:28 crc kubenswrapper[4782]: I1124 11:57:28.491260 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:28 crc kubenswrapper[4782]: E1124 11:57:28.491938 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:28 crc kubenswrapper[4782]: I1124 11:57:28.508199 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 11:57:29 crc kubenswrapper[4782]: I1124 11:57:29.490168 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:29 crc kubenswrapper[4782]: I1124 11:57:29.490256 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:29 crc kubenswrapper[4782]: I1124 11:57:29.490264 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:29 crc kubenswrapper[4782]: E1124 11:57:29.490939 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:29 crc kubenswrapper[4782]: E1124 11:57:29.490987 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:29 crc kubenswrapper[4782]: E1124 11:57:29.491038 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:30 crc kubenswrapper[4782]: I1124 11:57:30.490517 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:30 crc kubenswrapper[4782]: E1124 11:57:30.490672 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:31 crc kubenswrapper[4782]: I1124 11:57:31.065806 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:31 crc kubenswrapper[4782]: E1124 11:57:31.065955 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:57:31 crc kubenswrapper[4782]: E1124 11:57:31.066040 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs podName:1e8feb84-86f6-4afe-9563-42016a7cd6ca nodeName:}" failed. No retries permitted until 2025-11-24 11:58:35.066016408 +0000 UTC m=+164.309850187 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs") pod "network-metrics-daemon-fvr97" (UID: "1e8feb84-86f6-4afe-9563-42016a7cd6ca") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:57:31 crc kubenswrapper[4782]: I1124 11:57:31.490776 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:31 crc kubenswrapper[4782]: I1124 11:57:31.490800 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:31 crc kubenswrapper[4782]: I1124 11:57:31.490804 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:31 crc kubenswrapper[4782]: E1124 11:57:31.492030 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:31 crc kubenswrapper[4782]: E1124 11:57:31.492150 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:31 crc kubenswrapper[4782]: E1124 11:57:31.492093 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:31 crc kubenswrapper[4782]: I1124 11:57:31.503283 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.503268423 podStartE2EDuration="6.503268423s" podCreationTimestamp="2025-11-24 11:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:31.502194841 +0000 UTC m=+100.746028610" watchObservedRunningTime="2025-11-24 11:57:31.503268423 +0000 UTC m=+100.747102192" Nov 24 11:57:32 crc kubenswrapper[4782]: I1124 11:57:32.490337 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:32 crc kubenswrapper[4782]: E1124 11:57:32.490802 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:33 crc kubenswrapper[4782]: I1124 11:57:33.490105 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:33 crc kubenswrapper[4782]: E1124 11:57:33.490278 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:33 crc kubenswrapper[4782]: I1124 11:57:33.490654 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:33 crc kubenswrapper[4782]: I1124 11:57:33.490816 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:33 crc kubenswrapper[4782]: E1124 11:57:33.490933 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:33 crc kubenswrapper[4782]: E1124 11:57:33.490819 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:34 crc kubenswrapper[4782]: I1124 11:57:34.490009 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:34 crc kubenswrapper[4782]: E1124 11:57:34.490231 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:35 crc kubenswrapper[4782]: I1124 11:57:35.490526 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:35 crc kubenswrapper[4782]: I1124 11:57:35.490623 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:35 crc kubenswrapper[4782]: E1124 11:57:35.490674 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:35 crc kubenswrapper[4782]: E1124 11:57:35.490798 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:35 crc kubenswrapper[4782]: I1124 11:57:35.490886 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:35 crc kubenswrapper[4782]: E1124 11:57:35.491012 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:36 crc kubenswrapper[4782]: I1124 11:57:36.490614 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:36 crc kubenswrapper[4782]: E1124 11:57:36.491273 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:37 crc kubenswrapper[4782]: I1124 11:57:37.490696 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:37 crc kubenswrapper[4782]: I1124 11:57:37.490795 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:37 crc kubenswrapper[4782]: E1124 11:57:37.491636 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:37 crc kubenswrapper[4782]: I1124 11:57:37.490972 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:37 crc kubenswrapper[4782]: E1124 11:57:37.491776 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:37 crc kubenswrapper[4782]: E1124 11:57:37.492548 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:38 crc kubenswrapper[4782]: I1124 11:57:38.490501 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:38 crc kubenswrapper[4782]: E1124 11:57:38.491010 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:39 crc kubenswrapper[4782]: I1124 11:57:39.490673 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:39 crc kubenswrapper[4782]: I1124 11:57:39.490711 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:39 crc kubenswrapper[4782]: E1124 11:57:39.490941 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:39 crc kubenswrapper[4782]: I1124 11:57:39.490963 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:39 crc kubenswrapper[4782]: E1124 11:57:39.491096 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:39 crc kubenswrapper[4782]: E1124 11:57:39.491234 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:40 crc kubenswrapper[4782]: I1124 11:57:40.490921 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:40 crc kubenswrapper[4782]: E1124 11:57:40.491349 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:40 crc kubenswrapper[4782]: I1124 11:57:40.492705 4782 scope.go:117] "RemoveContainer" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" Nov 24 11:57:40 crc kubenswrapper[4782]: E1124 11:57:40.493006 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:57:41 crc kubenswrapper[4782]: I1124 11:57:41.489970 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:41 crc kubenswrapper[4782]: I1124 11:57:41.490618 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:41 crc kubenswrapper[4782]: I1124 11:57:41.490909 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:41 crc kubenswrapper[4782]: E1124 11:57:41.490890 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:41 crc kubenswrapper[4782]: E1124 11:57:41.492340 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:41 crc kubenswrapper[4782]: E1124 11:57:41.492506 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:42 crc kubenswrapper[4782]: I1124 11:57:42.490604 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:42 crc kubenswrapper[4782]: E1124 11:57:42.490744 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:43 crc kubenswrapper[4782]: I1124 11:57:43.490124 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:43 crc kubenswrapper[4782]: I1124 11:57:43.490163 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:43 crc kubenswrapper[4782]: E1124 11:57:43.490408 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:43 crc kubenswrapper[4782]: I1124 11:57:43.490485 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:43 crc kubenswrapper[4782]: E1124 11:57:43.490661 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:43 crc kubenswrapper[4782]: E1124 11:57:43.490810 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:44 crc kubenswrapper[4782]: I1124 11:57:44.489985 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:44 crc kubenswrapper[4782]: E1124 11:57:44.490144 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:45 crc kubenswrapper[4782]: I1124 11:57:45.490123 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:45 crc kubenswrapper[4782]: I1124 11:57:45.490183 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:45 crc kubenswrapper[4782]: E1124 11:57:45.490308 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:45 crc kubenswrapper[4782]: E1124 11:57:45.490634 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:45 crc kubenswrapper[4782]: I1124 11:57:45.490679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:45 crc kubenswrapper[4782]: E1124 11:57:45.490830 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:46 crc kubenswrapper[4782]: I1124 11:57:46.490705 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:46 crc kubenswrapper[4782]: E1124 11:57:46.491489 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.088257 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/1.log" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.088954 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/0.log" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.089022 4782 generic.go:334] "Generic (PLEG): container finished" podID="56de1ffb-9734-4992-b477-591dfae5ad41" containerID="43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8" exitCode=1 Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.089131 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerDied","Data":"43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8"} Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.089282 4782 scope.go:117] "RemoveContainer" containerID="06f776a91bf582dea4ca4e55295d4c588234ad8d910e2db5501f7844bfac418f" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.089847 4782 scope.go:117] "RemoveContainer" containerID="43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8" Nov 24 11:57:47 crc kubenswrapper[4782]: E1124 11:57:47.090129 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fp44f_openshift-multus(56de1ffb-9734-4992-b477-591dfae5ad41)\"" pod="openshift-multus/multus-fp44f" podUID="56de1ffb-9734-4992-b477-591dfae5ad41" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.107143 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.107118125 podStartE2EDuration="19.107118125s" podCreationTimestamp="2025-11-24 11:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:57:31.523063375 +0000 UTC m=+100.766897154" watchObservedRunningTime="2025-11-24 11:57:47.107118125 +0000 UTC m=+116.350951904" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.490215 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.490277 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:47 crc kubenswrapper[4782]: I1124 11:57:47.490298 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:47 crc kubenswrapper[4782]: E1124 11:57:47.490469 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:47 crc kubenswrapper[4782]: E1124 11:57:47.490547 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:47 crc kubenswrapper[4782]: E1124 11:57:47.490627 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:48 crc kubenswrapper[4782]: I1124 11:57:48.094697 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/1.log" Nov 24 11:57:48 crc kubenswrapper[4782]: I1124 11:57:48.490234 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:48 crc kubenswrapper[4782]: E1124 11:57:48.490415 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:49 crc kubenswrapper[4782]: I1124 11:57:49.490770 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:49 crc kubenswrapper[4782]: I1124 11:57:49.490811 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:49 crc kubenswrapper[4782]: I1124 11:57:49.490895 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:49 crc kubenswrapper[4782]: E1124 11:57:49.490962 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:49 crc kubenswrapper[4782]: E1124 11:57:49.491287 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:49 crc kubenswrapper[4782]: E1124 11:57:49.491151 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:50 crc kubenswrapper[4782]: I1124 11:57:50.490659 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:50 crc kubenswrapper[4782]: E1124 11:57:50.490875 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:51 crc kubenswrapper[4782]: I1124 11:57:51.490597 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:51 crc kubenswrapper[4782]: I1124 11:57:51.490687 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:51 crc kubenswrapper[4782]: I1124 11:57:51.490766 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:51 crc kubenswrapper[4782]: E1124 11:57:51.492353 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:51 crc kubenswrapper[4782]: E1124 11:57:51.492486 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:51 crc kubenswrapper[4782]: E1124 11:57:51.492428 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:51 crc kubenswrapper[4782]: I1124 11:57:51.492867 4782 scope.go:117] "RemoveContainer" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" Nov 24 11:57:51 crc kubenswrapper[4782]: E1124 11:57:51.493080 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zzzxx_openshift-ovn-kubernetes(1de863b0-02f8-435c-9669-4ea856b352d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" Nov 24 11:57:51 crc kubenswrapper[4782]: E1124 11:57:51.534324 4782 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 11:57:51 crc kubenswrapper[4782]: E1124 11:57:51.566236 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:57:52 crc kubenswrapper[4782]: I1124 11:57:52.490755 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:52 crc kubenswrapper[4782]: E1124 11:57:52.490889 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:53 crc kubenswrapper[4782]: I1124 11:57:53.490924 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:53 crc kubenswrapper[4782]: I1124 11:57:53.490924 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:53 crc kubenswrapper[4782]: E1124 11:57:53.491159 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:53 crc kubenswrapper[4782]: E1124 11:57:53.491207 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:53 crc kubenswrapper[4782]: I1124 11:57:53.490944 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:53 crc kubenswrapper[4782]: E1124 11:57:53.491357 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:54 crc kubenswrapper[4782]: I1124 11:57:54.490599 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:57:54 crc kubenswrapper[4782]: E1124 11:57:54.490961 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:57:55 crc kubenswrapper[4782]: I1124 11:57:55.490641 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:57:55 crc kubenswrapper[4782]: I1124 11:57:55.490646 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:57:55 crc kubenswrapper[4782]: E1124 11:57:55.490917 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:57:55 crc kubenswrapper[4782]: E1124 11:57:55.491050 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:57:55 crc kubenswrapper[4782]: I1124 11:57:55.491510 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:57:55 crc kubenswrapper[4782]: E1124 11:57:55.491625 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:57:56 crc kubenswrapper[4782]: I1124 11:57:56.490853 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:57:56 crc kubenswrapper[4782]: E1124 11:57:56.491066 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
Nov 24 11:57:56 crc kubenswrapper[4782]: E1124 11:57:56.567224 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 11:57:57 crc kubenswrapper[4782]: I1124 11:57:57.490327 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:57:57 crc kubenswrapper[4782]: I1124 11:57:57.490484 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:57:57 crc kubenswrapper[4782]: E1124 11:57:57.491158 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:57:57 crc kubenswrapper[4782]: I1124 11:57:57.491246 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:57:57 crc kubenswrapper[4782]: E1124 11:57:57.491545 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:57:57 crc kubenswrapper[4782]: E1124 11:57:57.491970 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:57:58 crc kubenswrapper[4782]: I1124 11:57:58.490686 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:57:58 crc kubenswrapper[4782]: E1124 11:57:58.491077 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
Nov 24 11:57:58 crc kubenswrapper[4782]: I1124 11:57:58.491225 4782 scope.go:117] "RemoveContainer" containerID="43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8"
Nov 24 11:57:59 crc kubenswrapper[4782]: I1124 11:57:59.136688 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/1.log"
Nov 24 11:57:59 crc kubenswrapper[4782]: I1124 11:57:59.137235 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerStarted","Data":"d15e54d518e525fbb1abc68ed1bf4a5ba040d9a7c86aa3899fb0496edc578fcd"}
Nov 24 11:57:59 crc kubenswrapper[4782]: I1124 11:57:59.490647 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:57:59 crc kubenswrapper[4782]: I1124 11:57:59.490777 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:57:59 crc kubenswrapper[4782]: E1124 11:57:59.490848 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:57:59 crc kubenswrapper[4782]: I1124 11:57:59.490685 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:57:59 crc kubenswrapper[4782]: E1124 11:57:59.490985 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:57:59 crc kubenswrapper[4782]: E1124 11:57:59.491137 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:58:00 crc kubenswrapper[4782]: I1124 11:58:00.489974 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:58:00 crc kubenswrapper[4782]: E1124 11:58:00.490118 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
Nov 24 11:58:01 crc kubenswrapper[4782]: I1124 11:58:01.490726 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:58:01 crc kubenswrapper[4782]: E1124 11:58:01.491671 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:58:01 crc kubenswrapper[4782]: I1124 11:58:01.491750 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:58:01 crc kubenswrapper[4782]: I1124 11:58:01.491784 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:58:01 crc kubenswrapper[4782]: E1124 11:58:01.491916 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:58:01 crc kubenswrapper[4782]: E1124 11:58:01.491985 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:58:01 crc kubenswrapper[4782]: E1124 11:58:01.568153 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 11:58:02 crc kubenswrapper[4782]: I1124 11:58:02.490506 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97"
Nov 24 11:58:02 crc kubenswrapper[4782]: E1124 11:58:02.491055 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca"
pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:58:02 crc kubenswrapper[4782]: I1124 11:58:02.491155 4782 scope.go:117] "RemoveContainer" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.149898 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/3.log" Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.151761 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerStarted","Data":"54b478eb651af6a52102f381e792521d9775fc8fad54899e20bb47909f65f992"} Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.152150 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.178240 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podStartSLOduration=111.178225957 podStartE2EDuration="1m51.178225957s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:03.176316359 +0000 UTC m=+132.420150128" watchObservedRunningTime="2025-11-24 11:58:03.178225957 +0000 UTC m=+132.422059726" Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.435336 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fvr97"] Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.435906 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:58:03 crc kubenswrapper[4782]: E1124 11:58:03.436180 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.496062 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:58:03 crc kubenswrapper[4782]: E1124 11:58:03.496206 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.496412 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:58:03 crc kubenswrapper[4782]: E1124 11:58:03.496478 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:58:03 crc kubenswrapper[4782]: I1124 11:58:03.496690 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:58:03 crc kubenswrapper[4782]: E1124 11:58:03.496753 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:58:05 crc kubenswrapper[4782]: I1124 11:58:05.490473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:58:05 crc kubenswrapper[4782]: I1124 11:58:05.490509 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:58:05 crc kubenswrapper[4782]: I1124 11:58:05.490745 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:58:05 crc kubenswrapper[4782]: E1124 11:58:05.491237 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:58:05 crc kubenswrapper[4782]: E1124 11:58:05.491293 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:58:05 crc kubenswrapper[4782]: E1124 11:58:05.490948 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:58:05 crc kubenswrapper[4782]: I1124 11:58:05.490765 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:58:05 crc kubenswrapper[4782]: E1124 11:58:05.491420 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvr97" podUID="1e8feb84-86f6-4afe-9563-42016a7cd6ca" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.490430 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.490752 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.490862 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.491031 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.495775 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.495775 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.497478 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.498126 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.499737 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 11:58:07 crc kubenswrapper[4782]: I1124 11:58:07.500929 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.008626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.067508 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.068353 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.069841 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.070675 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.073722 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.074868 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.079668 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.079864 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.080490 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.080615 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.080731 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.080807 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkg9c"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.079677 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jxc\" (UniqueName: \"kubernetes.io/projected/32cd8481-2e97-4cd1-9ef6-889d11defb32-kube-api-access-m4jxc\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.080916 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-client-ca\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081146 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-audit-policies\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081165 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081186 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-encryption-config\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/32cd8481-2e97-4cd1-9ef6-889d11defb32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081303 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cd8481-2e97-4cd1-9ef6-889d11defb32-serving-cert\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081347 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-serving-cert\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-config\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081469 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxps5\" (UniqueName: \"kubernetes.io/projected/59b13b1d-d00e-439e-ba63-29a792d3dbf6-kube-api-access-mxps5\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081554 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081605 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6632ed0d-58eb-4873-b45b-e2750ac2267b-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081675 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhtv\" (UniqueName: \"kubernetes.io/projected/6632ed0d-58eb-4873-b45b-e2750ac2267b-kube-api-access-7vhtv\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081705 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-etcd-client\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081758 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59b13b1d-d00e-439e-ba63-29a792d3dbf6-audit-dir\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.081816 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.082323 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.085581 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rt2c7"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.086429 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.086486 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.086722 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.086848 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.086927 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.086971 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.087085 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.087347 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.087675 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.087789 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.087854 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.087939 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.087996 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.088419 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.091784 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qs4j5"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.092289 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qs4j5"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.106367 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bhx6w"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.107135 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.119432 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kqdns"]
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.119595 4782 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.119641 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.119708 4782 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.119724 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.119768 4782 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.119781 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.119820 4782 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.119833 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.119872 4782 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.119884 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.119911 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49"]
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.119923 4782 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.119939 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.119976 4782 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.119991 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.120048 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: secrets "v4-0-config-system-session" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.120061 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-session\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.120195 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.120582 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.120746 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.121943 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kqdns"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.123436 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mw86p"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.124134 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.126837 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: secrets "v4-0-config-system-ocp-branding-template" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.126897 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.126963 4782 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: secrets "console-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.126984 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.127037 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.127735 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.133536 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.142833 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.143847 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.144558 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.173925 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8xv9n"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184436 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-svpdv"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184674 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cd8481-2e97-4cd1-9ef6-889d11defb32-serving-cert\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-serving-cert\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184763 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-config\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184788 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxps5\" (UniqueName: \"kubernetes.io/projected/59b13b1d-d00e-439e-ba63-29a792d3dbf6-kube-api-access-mxps5\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184790 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184813 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184834 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6632ed0d-58eb-4873-b45b-e2750ac2267b-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184872 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhtv\" (UniqueName: \"kubernetes.io/projected/6632ed0d-58eb-4873-b45b-e2750ac2267b-kube-api-access-7vhtv\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184898 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-etcd-client\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184934 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59b13b1d-d00e-439e-ba63-29a792d3dbf6-audit-dir\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184947 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4cfz9"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.184975 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185011 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jxc\" (UniqueName: \"kubernetes.io/projected/32cd8481-2e97-4cd1-9ef6-889d11defb32-kube-api-access-m4jxc\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-client-ca\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185065 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-audit-policies\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-encryption-config\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185111 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/32cd8481-2e97-4cd1-9ef6-889d11defb32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185356 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185543 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/32cd8481-2e97-4cd1-9ef6-889d11defb32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185633 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-svpdv" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.185691 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59b13b1d-d00e-439e-ba63-29a792d3dbf6-audit-dir\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.186305 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.189629 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.190552 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59b13b1d-d00e-439e-ba63-29a792d3dbf6-audit-policies\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.191018 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-config\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.191957 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-client-ca\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.199553 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6632ed0d-58eb-4873-b45b-e2750ac2267b-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.199906 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cd8481-2e97-4cd1-9ef6-889d11defb32-serving-cert\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.201853 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-serving-cert\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.202962 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-encryption-config\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.203913 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59b13b1d-d00e-439e-ba63-29a792d3dbf6-etcd-client\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228520 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228561 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228576 4782 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: configmaps "audit" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228615 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228629 4782 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: secrets "console-dockercfg-f62pw" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228627 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" 
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228641 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-dockercfg-f62pw\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228643 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228680 4782 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: configmaps "oauth-serving-cert" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228690 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"oauth-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228699 4782 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: secrets "console-oauth-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228725 4782 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228733 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: secrets "v4-0-config-system-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228729 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-oauth-config\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228742 4782 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: secrets "oauth-openshift-dockercfg-znhcc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228758 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-openshift-dockercfg-znhcc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.228771 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: secrets "v4-0-config-system-router-certs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228782 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-router-certs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228743 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.228737 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.228939 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.229045 4782 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.229061 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.229485 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: secrets "v4-0-config-user-idp-0-file-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.229583 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-idp-0-file-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.234410 4782 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: configmaps "console-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.234452 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.234560 4782 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: configmaps "service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.234576 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.234607 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.234642 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.239090 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.239329 4782 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.239360 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.239555 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.239587 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.239717 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.239746 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.239835 4782 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.239861 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.239906 4782 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 11:58:11.239918 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.239943 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240008 4782 reflector.go:368] Caches populated
for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240107 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240199 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240260 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240299 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240318 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240261 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240124 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240116 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.240853 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.241056 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.241238 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.241403 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.242881 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.243135 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.243294 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.243555 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.243688 4782 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 11:58:11 crc kubenswrapper[4782]: E1124 
11:58:11.243715 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.243765 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.243951 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244071 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244153 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244302 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244300 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244417 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244726 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244822 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.244890 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.245235 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.245320 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.245420 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.248418 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhtv\" (UniqueName: \"kubernetes.io/projected/6632ed0d-58eb-4873-b45b-e2750ac2267b-kube-api-access-7vhtv\") pod \"route-controller-manager-6576b87f9c-z8p62\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.251600 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 11:58:11 crc 
kubenswrapper[4782]: I1124 11:58:11.251842 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.251957 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.252090 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.252744 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.253679 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.254173 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.259506 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.259894 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.259941 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.260064 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.260080 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.261354 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.262012 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.262166 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.262272 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.262274 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7z6sc"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.262876 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.262388 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.264607 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.265159 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.265629 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7hpsn"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.266279 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.275359 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.275690 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.276466 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.280424 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-b6596"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.281745 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.282111 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.282335 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b6596" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.285961 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-audit\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.285999 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286025 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286048 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dada1d-e829-4a4b-804e-2fac2553dbc4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286076 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqfx\" (UniqueName: \"kubernetes.io/projected/79cdc549-a93f-4a59-8f97-4183fe762ce2-kube-api-access-7pqfx\") pod \"cluster-samples-operator-665b6dd947-9dcvb\" (UID: \"79cdc549-a93f-4a59-8f97-4183fe762ce2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6cxj\" (UniqueName: \"kubernetes.io/projected/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-kube-api-access-n6cxj\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286132 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06b66dd7-ceae-4692-a6c6-85102bc27717-trusted-ca\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286171 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286195 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286238 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndj55\" (UniqueName: \"kubernetes.io/projected/a162cdd4-6657-40da-92f9-5f428fe8dd96-kube-api-access-ndj55\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286259 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7js5z\" (UniqueName: \"kubernetes.io/projected/05464aa5-d507-4bdb-9f21-1de746c2e4ba-kube-api-access-7js5z\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286279 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286301 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b855f8b-c1a4-4ab4-8400-c4b93a831025-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286329 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-image-import-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286393 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38087238-7cf9-4f55-9c71-f18caa92ec78-audit-dir\") pod \"apiserver-76f77b778f-qkg9c\" (UID: 
\"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286420 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79cdc549-a93f-4a59-8f97-4183fe762ce2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9dcvb\" (UID: \"79cdc549-a93f-4a59-8f97-4183fe762ce2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286487 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dada1d-e829-4a4b-804e-2fac2553dbc4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286687 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-dir\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286755 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/38087238-7cf9-4f55-9c71-f18caa92ec78-node-pullsecrets\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286815 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nflx\" (UniqueName: \"kubernetes.io/projected/9023f41f-9e84-42aa-ae26-da378cf12eba-kube-api-access-6nflx\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: 
\"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286877 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.286900 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b855f8b-c1a4-4ab4-8400-c4b93a831025-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287256 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287323 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b84edfa0-ff1b-4a50-9019-b340b41b9f53-serving-cert\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287346 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287403 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckf2\" (UniqueName: \"kubernetes.io/projected/38087238-7cf9-4f55-9c71-f18caa92ec78-kube-api-access-6ckf2\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287448 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvph7\" (UniqueName: \"kubernetes.io/projected/9babc041-e14e-4226-aebc-50e771089c3c-kube-api-access-zvph7\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287581 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287639 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b66dd7-ceae-4692-a6c6-85102bc27717-config\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287664 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-service-ca-bundle\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287699 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl2jh\" (UniqueName: \"kubernetes.io/projected/b84edfa0-ff1b-4a50-9019-b340b41b9f53-kube-api-access-wl2jh\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9babc041-e14e-4226-aebc-50e771089c3c-images\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287766 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06b66dd7-ceae-4692-a6c6-85102bc27717-serving-cert\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287784 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8sm9\" (UniqueName: \"kubernetes.io/projected/80dada1d-e829-4a4b-804e-2fac2553dbc4-kube-api-access-f8sm9\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.287806 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-client-ca\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288072 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9babc041-e14e-4226-aebc-50e771089c3c-config\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288132 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288148 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzwx4\" (UniqueName: \"kubernetes.io/projected/3b2d93f2-8a27-4def-af47-b6a6f04039b4-kube-api-access-qzwx4\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288230 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-client\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288271 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288288 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288337 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9023f41f-9e84-42aa-ae26-da378cf12eba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288383 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: 
\"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288402 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05464aa5-d507-4bdb-9f21-1de746c2e4ba-config\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288462 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288513 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-serving-cert\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288532 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b855f8b-c1a4-4ab4-8400-c4b93a831025-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288578 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9023f41f-9e84-42aa-ae26-da378cf12eba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288595 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/05464aa5-d507-4bdb-9f21-1de746c2e4ba-machine-approver-tls\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-encryption-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288646 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9babc041-e14e-4226-aebc-50e771089c3c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288677 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288691 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-serving-cert\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288705 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05464aa5-d507-4bdb-9f21-1de746c2e4ba-auth-proxy-config\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288725 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-config\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288739 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28s5\" (UniqueName: \"kubernetes.io/projected/8b855f8b-c1a4-4ab4-8400-c4b93a831025-kube-api-access-f28s5\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288771 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7z2\" 
(UniqueName: \"kubernetes.io/projected/06b66dd7-ceae-4692-a6c6-85102bc27717-kube-api-access-jl7z2\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288826 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-config\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.288841 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.291254 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.291866 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.296646 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.297152 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.303638 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.313839 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.315058 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.327153 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8shpq"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.329980 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.351146 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.353146 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.353638 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.354068 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.354843 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.357514 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-49vlc"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.357899 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-csb8m"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.358149 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.363853 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.364098 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.364261 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.365220 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.367519 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.368214 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.369934 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.370016 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxps5\" (UniqueName: \"kubernetes.io/projected/59b13b1d-d00e-439e-ba63-29a792d3dbf6-kube-api-access-mxps5\") pod \"apiserver-7bbb656c7d-52qsf\" (UID: \"59b13b1d-d00e-439e-ba63-29a792d3dbf6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.375992 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.376699 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.377061 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.377283 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.379234 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jxc\" (UniqueName: \"kubernetes.io/projected/32cd8481-2e97-4cd1-9ef6-889d11defb32-kube-api-access-m4jxc\") pod \"openshift-config-operator-7777fb866f-2lckb\" (UID: \"32cd8481-2e97-4cd1-9ef6-889d11defb32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.384859 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.385491 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.387476 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.387964 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qs4j5"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.388161 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5k47f"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.388691 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389259 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389360 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b84edfa0-ff1b-4a50-9019-b340b41b9f53-serving-cert\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389493 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckf2\" (UniqueName: \"kubernetes.io/projected/38087238-7cf9-4f55-9c71-f18caa92ec78-kube-api-access-6ckf2\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389603 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvph7\" (UniqueName: \"kubernetes.io/projected/9babc041-e14e-4226-aebc-50e771089c3c-kube-api-access-zvph7\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389777 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389869 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b66dd7-ceae-4692-a6c6-85102bc27717-config\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.389953 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-service-ca-bundle\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390023 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl2jh\" (UniqueName: 
\"kubernetes.io/projected/b84edfa0-ff1b-4a50-9019-b340b41b9f53-kube-api-access-wl2jh\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390097 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06b66dd7-ceae-4692-a6c6-85102bc27717-serving-cert\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390177 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8sm9\" (UniqueName: \"kubernetes.io/projected/80dada1d-e829-4a4b-804e-2fac2553dbc4-kube-api-access-f8sm9\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390274 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqqj\" (UniqueName: \"kubernetes.io/projected/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-kube-api-access-dhqqj\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390388 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9babc041-e14e-4226-aebc-50e771089c3c-images\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390471 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7515d5c-f17a-43b1-a70a-869c7fbf7388-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-client-ca\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9babc041-e14e-4226-aebc-50e771089c3c-config\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390760 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390834 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390903 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzwx4\" (UniqueName: \"kubernetes.io/projected/3b2d93f2-8a27-4def-af47-b6a6f04039b4-kube-api-access-qzwx4\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.390983 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-client\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.391058 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7515d5c-f17a-43b1-a70a-869c7fbf7388-config\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.391112 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.391636 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.391995 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rt2c7"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392090 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.391131 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9023f41f-9e84-42aa-ae26-da378cf12eba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392387 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392417 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05464aa5-d507-4bdb-9f21-1de746c2e4ba-config\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392443 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392500 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-serving-cert\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392516 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b855f8b-c1a4-4ab4-8400-c4b93a831025-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392535 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedf6854-b0bf-4266-b1bf-144d192c967a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392560 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9023f41f-9e84-42aa-ae26-da378cf12eba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392577 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/05464aa5-d507-4bdb-9f21-1de746c2e4ba-machine-approver-tls\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392596 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392618 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392636 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-encryption-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9babc041-e14e-4226-aebc-50e771089c3c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392698 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-serving-cert\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392714 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05464aa5-d507-4bdb-9f21-1de746c2e4ba-auth-proxy-config\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392728 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-config\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392743 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f28s5\" (UniqueName: \"kubernetes.io/projected/8b855f8b-c1a4-4ab4-8400-c4b93a831025-kube-api-access-f28s5\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392769 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7z2\" (UniqueName: \"kubernetes.io/projected/06b66dd7-ceae-4692-a6c6-85102bc27717-kube-api-access-jl7z2\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392786 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhldt\" (UniqueName: \"kubernetes.io/projected/c0d9b214-4fda-42e0-ad8a-fed4e0637175-kube-api-access-xhldt\") pod \"downloads-7954f5f757-svpdv\" (UID: \"c0d9b214-4fda-42e0-ad8a-fed4e0637175\") " pod="openshift-console/downloads-7954f5f757-svpdv" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedf6854-b0bf-4266-b1bf-144d192c967a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392825 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-config\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392859 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-audit\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392875 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392891 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlh9r\" (UniqueName: \"kubernetes.io/projected/1cc206e2-96fb-44f2-9717-f2a4c182b776-kube-api-access-wlh9r\") pod \"dns-operator-744455d44c-7hpsn\" (UID: \"1cc206e2-96fb-44f2-9717-f2a4c182b776\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392908 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dada1d-e829-4a4b-804e-2fac2553dbc4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392944 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqfx\" (UniqueName: \"kubernetes.io/projected/79cdc549-a93f-4a59-8f97-4183fe762ce2-kube-api-access-7pqfx\") pod \"cluster-samples-operator-665b6dd947-9dcvb\" (UID: \"79cdc549-a93f-4a59-8f97-4183fe762ce2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392964 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1cc206e2-96fb-44f2-9717-f2a4c182b776-metrics-tls\") pod \"dns-operator-744455d44c-7hpsn\" (UID: \"1cc206e2-96fb-44f2-9717-f2a4c182b776\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.392987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6cxj\" (UniqueName: \"kubernetes.io/projected/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-kube-api-access-n6cxj\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393006 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/950372c9-3a2e-4f3f-a1c7-04c1fa097be6-cert\") pod \"ingress-canary-b6596\" 
(UID: \"950372c9-3a2e-4f3f-a1c7-04c1fa097be6\") " pod="openshift-ingress-canary/ingress-canary-b6596" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393023 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dedf6854-b0bf-4266-b1bf-144d192c967a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393042 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393083 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06b66dd7-ceae-4692-a6c6-85102bc27717-trusted-ca\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393105 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7js5z\" (UniqueName: \"kubernetes.io/projected/05464aa5-d507-4bdb-9f21-1de746c2e4ba-kube-api-access-7js5z\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393136 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393152 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndj55\" (UniqueName: \"kubernetes.io/projected/a162cdd4-6657-40da-92f9-5f428fe8dd96-kube-api-access-ndj55\") pod \"console-f9d7485db-qs4j5\" (UID: 
\"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393178 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mhv\" (UniqueName: \"kubernetes.io/projected/950372c9-3a2e-4f3f-a1c7-04c1fa097be6-kube-api-access-58mhv\") pod \"ingress-canary-b6596\" (UID: \"950372c9-3a2e-4f3f-a1c7-04c1fa097be6\") " pod="openshift-ingress-canary/ingress-canary-b6596" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393200 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b855f8b-c1a4-4ab4-8400-c4b93a831025-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7515d5c-f17a-43b1-a70a-869c7fbf7388-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393405 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-image-import-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393432 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38087238-7cf9-4f55-9c71-f18caa92ec78-audit-dir\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393455 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79cdc549-a93f-4a59-8f97-4183fe762ce2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9dcvb\" (UID: \"79cdc549-a93f-4a59-8f97-4183fe762ce2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393486 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dada1d-e829-4a4b-804e-2fac2553dbc4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc 
kubenswrapper[4782]: I1124 11:58:11.393529 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-dir\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393550 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393578 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393603 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/38087238-7cf9-4f55-9c71-f18caa92ec78-node-pullsecrets\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393647 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nflx\" (UniqueName: \"kubernetes.io/projected/9023f41f-9e84-42aa-ae26-da378cf12eba-kube-api-access-6nflx\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393676 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393698 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b855f8b-c1a4-4ab4-8400-c4b93a831025-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393721 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.393868 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.394588 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.394615 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.394625 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkg9c"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.388137 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.396079 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-service-ca-bundle\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.396210 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-client-ca\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.396796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9babc041-e14e-4226-aebc-50e771089c3c-images\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.397311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.397718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b66dd7-ceae-4692-a6c6-85102bc27717-config\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.398593 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/06b66dd7-ceae-4692-a6c6-85102bc27717-serving-cert\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.398938 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b84edfa0-ff1b-4a50-9019-b340b41b9f53-serving-cert\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.400235 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06b66dd7-ceae-4692-a6c6-85102bc27717-trusted-ca\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.400356 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bhx6w"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.400590 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-svpdv"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.400656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.400680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05464aa5-d507-4bdb-9f21-1de746c2e4ba-config\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.400713 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38087238-7cf9-4f55-9c71-f18caa92ec78-audit-dir\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.401337 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9babc041-e14e-4226-aebc-50e771089c3c-config\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.401961 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.402201 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: 
\"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.403278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b855f8b-c1a4-4ab4-8400-c4b93a831025-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.403425 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/38087238-7cf9-4f55-9c71-f18caa92ec78-node-pullsecrets\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.404087 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79cdc549-a93f-4a59-8f97-4183fe762ce2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9dcvb\" (UID: \"79cdc549-a93f-4a59-8f97-4183fe762ce2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.404252 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.405106 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-client\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.405558 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.405841 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.406061 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dada1d-e829-4a4b-804e-2fac2553dbc4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.406111 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-dir\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.406593 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.409590 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b84edfa0-ff1b-4a50-9019-b340b41b9f53-config\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.409719 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9023f41f-9e84-42aa-ae26-da378cf12eba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.410105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dada1d-e829-4a4b-804e-2fac2553dbc4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.410133 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4cfz9"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.410159 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.410939 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b855f8b-c1a4-4ab4-8400-c4b93a831025-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.411315 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-config\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.411771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05464aa5-d507-4bdb-9f21-1de746c2e4ba-auth-proxy-config\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.411925 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.412295 4782 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b6596"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.413036 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.413444 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/05464aa5-d507-4bdb-9f21-1de746c2e4ba-machine-approver-tls\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.413727 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9023f41f-9e84-42aa-ae26-da378cf12eba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.417241 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.422247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-serving-cert\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.422695 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-serving-cert\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.430401 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7q7d9"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.432094 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.432193 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.432576 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9babc041-e14e-4226-aebc-50e771089c3c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.434233 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.434563 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.435697 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.436443 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.439163 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.446134 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.452613 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.454348 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.460844 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kqdns"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.464674 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.466343 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.469573 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7hpsn"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.471300 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jzdvr"] Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.471842 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.472409 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jzdvr"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.474614 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.476965 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8shpq"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.480816 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.482833 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-csb8m"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.485858 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8xv9n"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.494463 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqqj\" (UniqueName: \"kubernetes.io/projected/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-kube-api-access-dhqqj\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.494509 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7515d5c-f17a-43b1-a70a-869c7fbf7388-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496471 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7515d5c-f17a-43b1-a70a-869c7fbf7388-config\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496574 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedf6854-b0bf-4266-b1bf-144d192c967a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496675 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhldt\" (UniqueName: \"kubernetes.io/projected/c0d9b214-4fda-42e0-ad8a-fed4e0637175-kube-api-access-xhldt\") pod \"downloads-7954f5f757-svpdv\" (UID: \"c0d9b214-4fda-42e0-ad8a-fed4e0637175\") " pod="openshift-console/downloads-7954f5f757-svpdv"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedf6854-b0bf-4266-b1bf-144d192c967a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496721 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlh9r\" (UniqueName: \"kubernetes.io/projected/1cc206e2-96fb-44f2-9717-f2a4c182b776-kube-api-access-wlh9r\") pod \"dns-operator-744455d44c-7hpsn\" (UID: \"1cc206e2-96fb-44f2-9717-f2a4c182b776\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496752 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1cc206e2-96fb-44f2-9717-f2a4c182b776-metrics-tls\") pod \"dns-operator-744455d44c-7hpsn\" (UID: \"1cc206e2-96fb-44f2-9717-f2a4c182b776\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496775 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/950372c9-3a2e-4f3f-a1c7-04c1fa097be6-cert\") pod \"ingress-canary-b6596\" (UID: \"950372c9-3a2e-4f3f-a1c7-04c1fa097be6\") " pod="openshift-ingress-canary/ingress-canary-b6596"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496790 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dedf6854-b0bf-4266-b1bf-144d192c967a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58mhv\" (UniqueName: \"kubernetes.io/projected/950372c9-3a2e-4f3f-a1c7-04c1fa097be6-kube-api-access-58mhv\") pod \"ingress-canary-b6596\" (UID: \"950372c9-3a2e-4f3f-a1c7-04c1fa097be6\") " pod="openshift-ingress-canary/ingress-canary-b6596"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7515d5c-f17a-43b1-a70a-869c7fbf7388-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.496944 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.497757 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedf6854-b0bf-4266-b1bf-144d192c967a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.498951 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mw86p"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.499361 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7z6sc"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.501886 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedf6854-b0bf-4266-b1bf-144d192c967a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.502607 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jzdvr"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.503864 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.505713 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.507526 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.508400 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.510302 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5k47f"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.511235 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.514538 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.516728 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cjgtx"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.517214 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7q7d9"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.517282 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cjgtx"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.517656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.534465 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.551232 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.570364 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.591730 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.612448 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.622802 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1cc206e2-96fb-44f2-9717-f2a4c182b776-metrics-tls\") pod \"dns-operator-744455d44c-7hpsn\" (UID: \"1cc206e2-96fb-44f2-9717-f2a4c182b776\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.630820 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.644109 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2lckb"]
Nov 24 11:58:11 crc kubenswrapper[4782]: W1124 11:58:11.645328 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32cd8481_2e97_4cd1_9ef6_889d11defb32.slice/crio-534c28ae17be2df94b48a3a626ba9ac79eb391bbcaaac704667a343f3b7f070d WatchSource:0}: Error finding container 534c28ae17be2df94b48a3a626ba9ac79eb391bbcaaac704667a343f3b7f070d: Status 404 returned error can't find the container with id 534c28ae17be2df94b48a3a626ba9ac79eb391bbcaaac704667a343f3b7f070d
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.650041 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.670940 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.690772 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.697815 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.710766 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.720549 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.730563 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.750870 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.772798 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.791207 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.821320 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.830760 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.842725 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7515d5c-f17a-43b1-a70a-869c7fbf7388-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.850473 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.858259 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7515d5c-f17a-43b1-a70a-869c7fbf7388-config\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.871788 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.893030 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.903353 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/950372c9-3a2e-4f3f-a1c7-04c1fa097be6-cert\") pod \"ingress-canary-b6596\" (UID: \"950372c9-3a2e-4f3f-a1c7-04c1fa097be6\") " pod="openshift-ingress-canary/ingress-canary-b6596"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.905699 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.908363 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"]
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.910875 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.950190 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.970735 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 24 11:58:11 crc kubenswrapper[4782]: I1124 11:58:11.991135 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.011183 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.030195 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.057031 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.070753 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.090237 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.110059 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.130105 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.152110 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.171569 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.188181 4782 generic.go:334] "Generic (PLEG): container finished" podID="32cd8481-2e97-4cd1-9ef6-889d11defb32" containerID="da16298b53b28a292ceed8f036d2443af5f1a14381c0d740836e2b984ad2ea86" exitCode=0
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.188778 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" event={"ID":"32cd8481-2e97-4cd1-9ef6-889d11defb32","Type":"ContainerDied","Data":"da16298b53b28a292ceed8f036d2443af5f1a14381c0d740836e2b984ad2ea86"}
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.189138 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" event={"ID":"32cd8481-2e97-4cd1-9ef6-889d11defb32","Type":"ContainerStarted","Data":"534c28ae17be2df94b48a3a626ba9ac79eb391bbcaaac704667a343f3b7f070d"}
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.192094 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.193330 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" event={"ID":"6632ed0d-58eb-4873-b45b-e2750ac2267b","Type":"ContainerStarted","Data":"729309e1d37fa7f5f16b47eefcb83cfd6d0b65a4773c57a059dc0fdd3c5f14ac"}
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.193505 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" event={"ID":"6632ed0d-58eb-4873-b45b-e2750ac2267b","Type":"ContainerStarted","Data":"fadc668d2c0b8c355c3050b8a54bcae890b047e0345fc5a8a6c57a96cb896015"}
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.194236 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.195956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" event={"ID":"59b13b1d-d00e-439e-ba63-29a792d3dbf6","Type":"ContainerStarted","Data":"d3bccf8fe933b061ae8bde22ae1fedcb015b76730b0820a2c52af7c5c210094a"}
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.200831 4782 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z8p62 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.204535 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" podUID="6632ed0d-58eb-4873-b45b-e2750ac2267b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.210438 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.230553 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.250418 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.270471 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.291076 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.310573 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.330960 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.350864 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.368922 4782 request.go:700] Waited for 1.004210896s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.369855 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.391945 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.394609 4782 secret.go:188] Couldn't get secret openshift-console/console-serving-cert: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.394664 4782 configmap.go:193] Couldn't get configMap openshift-authentication/audit: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.394709 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.894682298 +0000 UTC m=+142.138516067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync secret cache: timed out waiting for the condition
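
The E-level burst that starts here (secret.go:188, configmap.go:193, nestedpendingoperations.go:348) is the same mount flow hitting volume sources whose watch caches have not populated yet: each MountVolume.SetUp fails with "failed to sync secret/configmap cache: timed out waiting for the condition" and is re-queued no earlier than the "No retries permitted until" timestamp. A sketch of that retry gate, assuming the 500ms initial backoff visible in the log and simple doubling on repeated failures (the kubelet's exact policy and cap live in its exponentialbackoff package; this is illustrative, not the kubelet's code):

    package main

    import (
        "fmt"
        "time"
    )

    // backoff mirrors the per-operation state behind
    // "No retries permitted until ... (durationBeforeRetry 500ms)".
    type backoff struct {
        lastErrorTime time.Time
        duration      time.Duration
    }

    func (b *backoff) fail(now time.Time) {
        if b.duration == 0 {
            b.duration = 500 * time.Millisecond // initial value, as in the log
        } else {
            b.duration *= 2 // assumption: doubling, capped in the real kubelet
        }
        b.lastErrorTime = now
        fmt.Printf("No retries permitted until %v (durationBeforeRetry %v)\n",
            now.Add(b.duration), b.duration)
    }

    func (b *backoff) retryAllowed(now time.Time) bool {
        return now.After(b.lastErrorTime.Add(b.duration))
    }

    func main() {
        var b backoff
        now := time.Now()
        for i := 0; i < 3; i++ { // three consecutive cache-sync failures
            b.fail(now)
            now = now.Add(b.duration) // pretend we waited out the whole backoff
        }
        fmt.Println("retry allowed:", b.retryAllowed(now.Add(time.Millisecond)))
    }
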
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.394749 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.89472987 +0000 UTC m=+142.138563639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.395785 4782 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.395991 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.895977266 +0000 UTC m=+142.139811275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.396923 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.396974 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.896961475 +0000 UTC m=+142.140795234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.398048 4782 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.398095 4782 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.398111 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.898097568 +0000 UTC m=+142.141931337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.398182 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.898147189 +0000 UTC m=+142.141981188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.399693 4782 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.399731 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.899722305 +0000 UTC m=+142.143556074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.399729 4782 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.399753 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.399781 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-image-import-ca podName:38087238-7cf9-4f55-9c71-f18caa92ec78 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.899769067 +0000 UTC m=+142.143603066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-image-import-ca") pod "apiserver-76f77b778f-qkg9c" (UID: "38087238-7cf9-4f55-9c71-f18caa92ec78") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.399805 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.899794328 +0000 UTC m=+142.143628337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.400981 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.401142 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.901105466 +0000 UTC m=+142.144960615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.401625 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.401675 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.901664372 +0000 UTC m=+142.145498141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.402725 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.402811 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.902799655 +0000 UTC m=+142.146633644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.405176 4782 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.405202 4782 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.405223 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config podName:38087238-7cf9-4f55-9c71-f18caa92ec78 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.905213786 +0000 UTC m=+142.149047555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config") pod "apiserver-76f77b778f-qkg9c" (UID: "38087238-7cf9-4f55-9c71-f18caa92ec78") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.405249 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-trusted-ca-bundle podName:38087238-7cf9-4f55-9c71-f18caa92ec78 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.905234506 +0000 UTC m=+142.149068515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-trusted-ca-bundle") pod "apiserver-76f77b778f-qkg9c" (UID: "38087238-7cf9-4f55-9c71-f18caa92ec78") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.405251 4782 secret.go:188] Couldn't get secret openshift-console/console-oauth-config: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.405284 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.905276928 +0000 UTC m=+142.149110697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.406329 4782 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.406360 4782 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.406392 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-audit podName:38087238-7cf9-4f55-9c71-f18caa92ec78 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.906363289 +0000 UTC m=+142.150197058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-audit") pod "apiserver-76f77b778f-qkg9c" (UID: "38087238-7cf9-4f55-9c71-f18caa92ec78") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.406499 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-serving-ca podName:38087238-7cf9-4f55-9c71-f18caa92ec78 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.906462042 +0000 UTC m=+142.150296051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-serving-ca") pod "apiserver-76f77b778f-qkg9c" (UID: "38087238-7cf9-4f55-9c71-f18caa92ec78") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.407037 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.407089 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.9070783 +0000 UTC m=+142.150912069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.410712 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.410729 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.410792 4782 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.410812 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.910786988 +0000 UTC m=+142.154620757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.410865 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.91085189 +0000 UTC m=+142.154685909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync configmap cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.410875 4782 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: E1124 11:58:12.410952 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-encryption-config podName:38087238-7cf9-4f55-9c71-f18caa92ec78 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:12.910942963 +0000 UTC m=+142.154776732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-encryption-config") pod "apiserver-76f77b778f-qkg9c" (UID: "38087238-7cf9-4f55-9c71-f18caa92ec78") : failed to sync secret cache: timed out waiting for the condition
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.430538 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.449782 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.470689 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.490997 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.510225 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.551462 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.571291 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.590606 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.610898 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.630360 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.650528 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.674202 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.691012 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.710689 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.730910 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.750351 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
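
The reflector.go:368 "Caches populated" lines are the other side of those failures: one watch cache per referenced object coming up to date, after which the pending mounts succeed on their next retry. The gate itself is client-go's cache-sync wait; here is a sketch using a namespace-scoped secret informer (the kubelet actually runs finer-grained per-object reflectors, and the namespace below is just one taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One informer scoped to a namespace seen in the log above.
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
            informers.WithNamespace("openshift-kube-apiserver-operator"))
        secrets := factory.Core().V1().Secrets().Informer()

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        factory.Start(ctx.Done())

        // When this returns false, callers surface exactly the error seen above:
        // "failed to sync secret cache: timed out waiting for the condition".
        if !cache.WaitForCacheSync(ctx.Done(), secrets.HasSynced) {
            fmt.Println("failed to sync secret cache: timed out waiting for the condition")
            return
        }
        fmt.Println("Caches populated for *v1.Secret")
    }
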
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.791225 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.811300 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.829948 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.851112 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.871611 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.890514 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.910780 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.920948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921024 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921102 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921147 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921194 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921637 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921766 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921883 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921954 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-encryption-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.921998 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922039 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922133 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config\") pod \"console-f9d7485db-qs4j5\" (UID: 
\"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-audit\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922238 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922281 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922418 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922460 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.922536 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-image-import-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.931610 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.951936 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.970447 4782 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 11:58:12 crc kubenswrapper[4782]: I1124 11:58:12.990719 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.035463 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8sm9\" (UniqueName: \"kubernetes.io/projected/80dada1d-e829-4a4b-804e-2fac2553dbc4-kube-api-access-f8sm9\") pod \"openshift-apiserver-operator-796bbdcf4f-sbscm\" (UID: \"80dada1d-e829-4a4b-804e-2fac2553dbc4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.054119 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl2jh\" (UniqueName: \"kubernetes.io/projected/b84edfa0-ff1b-4a50-9019-b340b41b9f53-kube-api-access-wl2jh\") pod \"authentication-operator-69f744f599-mw86p\" (UID: \"b84edfa0-ff1b-4a50-9019-b340b41b9f53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.075352 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvph7\" (UniqueName: \"kubernetes.io/projected/9babc041-e14e-4226-aebc-50e771089c3c-kube-api-access-zvph7\") pod \"machine-api-operator-5694c8668f-bhx6w\" (UID: \"9babc041-e14e-4226-aebc-50e771089c3c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.085072 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckf2\" (UniqueName: \"kubernetes.io/projected/38087238-7cf9-4f55-9c71-f18caa92ec78-kube-api-access-6ckf2\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.104875 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.104983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6cxj\" (UniqueName: \"kubernetes.io/projected/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-kube-api-access-n6cxj\") pod \"controller-manager-879f6c89f-8xv9n\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.129044 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7js5z\" (UniqueName: \"kubernetes.io/projected/05464aa5-d507-4bdb-9f21-1de746c2e4ba-kube-api-access-7js5z\") pod \"machine-approver-56656f9798-rzds7\" (UID: \"05464aa5-d507-4bdb-9f21-1de746c2e4ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.188676 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.189772 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nflx\" (UniqueName: \"kubernetes.io/projected/9023f41f-9e84-42aa-ae26-da378cf12eba-kube-api-access-6nflx\") pod \"openshift-controller-manager-operator-756b6f6bc6-4b65n\" (UID: \"9023f41f-9e84-42aa-ae26-da378cf12eba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.207404 4782 generic.go:334] "Generic (PLEG): container finished" podID="59b13b1d-d00e-439e-ba63-29a792d3dbf6" containerID="ea67928ea037a466aa70f187aa6e2dde51d58823d3fefde65ebf81eb6c2841e3" exitCode=0 Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.208316 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" event={"ID":"59b13b1d-d00e-439e-ba63-29a792d3dbf6","Type":"ContainerDied","Data":"ea67928ea037a466aa70f187aa6e2dde51d58823d3fefde65ebf81eb6c2841e3"} Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.210023 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.210713 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.219150 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b855f8b-c1a4-4ab4-8400-c4b93a831025-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.220066 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" event={"ID":"32cd8481-2e97-4cd1-9ef6-889d11defb32","Type":"ContainerStarted","Data":"bef4905965bdd2701037c1ab143a67033d8b08bb4b6145d92d9576b5499a18ff"} Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.220179 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.233740 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.243460 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.260830 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f28s5\" (UniqueName: \"kubernetes.io/projected/8b855f8b-c1a4-4ab4-8400-c4b93a831025-kube-api-access-f28s5\") pod \"cluster-image-registry-operator-dc59b4c8b-t9k49\" (UID: \"8b855f8b-c1a4-4ab4-8400-c4b93a831025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.270758 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7z2\" (UniqueName: \"kubernetes.io/projected/06b66dd7-ceae-4692-a6c6-85102bc27717-kube-api-access-jl7z2\") pod \"console-operator-58897d9998-kqdns\" (UID: \"06b66dd7-ceae-4692-a6c6-85102bc27717\") " pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.285291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqfx\" (UniqueName: \"kubernetes.io/projected/79cdc549-a93f-4a59-8f97-4183fe762ce2-kube-api-access-7pqfx\") pod \"cluster-samples-operator-665b6dd947-9dcvb\" (UID: \"79cdc549-a93f-4a59-8f97-4183fe762ce2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.290557 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.313702 4782 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.331541 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.357669 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.360795 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.369150 4782 request.go:700] Waited for 1.896468774s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.369234 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.370555 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.387139 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.391479 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.398722 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.438172 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqqj\" (UniqueName: \"kubernetes.io/projected/cc1d5729-b535-4a1d-89e9-f0cf5c28cd32-kube-api-access-dhqqj\") pod \"kube-storage-version-migrator-operator-b67b599dd-k8zwr\" (UID: \"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.452064 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7515d5c-f17a-43b1-a70a-869c7fbf7388-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nqgcj\" (UID: \"e7515d5c-f17a-43b1-a70a-869c7fbf7388\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.460159 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.469386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dedf6854-b0bf-4266-b1bf-144d192c967a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ft9xx\" (UID: \"dedf6854-b0bf-4266-b1bf-144d192c967a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.492424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58mhv\" (UniqueName: \"kubernetes.io/projected/950372c9-3a2e-4f3f-a1c7-04c1fa097be6-kube-api-access-58mhv\") pod \"ingress-canary-b6596\" (UID: \"950372c9-3a2e-4f3f-a1c7-04c1fa097be6\") " pod="openshift-ingress-canary/ingress-canary-b6596" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.512868 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mw86p"] Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.531666 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.539498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlh9r\" (UniqueName: \"kubernetes.io/projected/1cc206e2-96fb-44f2-9717-f2a4c182b776-kube-api-access-wlh9r\") pod \"dns-operator-744455d44c-7hpsn\" (UID: \"1cc206e2-96fb-44f2-9717-f2a4c182b776\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.550611 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 11:58:13 crc kubenswrapper[4782]: W1124 
11:58:13.561496 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84edfa0_ff1b_4a50_9019_b340b41b9f53.slice/crio-b9da019f267cbc2c2d9e3c00963a2241406c4dab2a0cea7c1044464df450575b WatchSource:0}: Error finding container b9da019f267cbc2c2d9e3c00963a2241406c4dab2a0cea7c1044464df450575b: Status 404 returned error can't find the container with id b9da019f267cbc2c2d9e3c00963a2241406c4dab2a0cea7c1044464df450575b Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.572564 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.575714 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.607892 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.612476 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.621201 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.622302 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631573 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97cc1a6f-1a30-4ec0-b771-87510a291869-images\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631630 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkk2b\" (UniqueName: \"kubernetes.io/projected/97cc1a6f-1a30-4ec0-b771-87510a291869-kube-api-access-zkk2b\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631698 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65n5b\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-kube-api-access-65n5b\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631723 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-bound-sa-token\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631747 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631829 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97cc1a6f-1a30-4ec0-b771-87510a291869-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-trusted-ca\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631939 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-certificates\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631964 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.631978 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/97cc1a6f-1a30-4ec0-b771-87510a291869-proxy-tls\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.632033 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-tls\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.632068 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.632085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhkmc\" (UniqueName: \"kubernetes.io/projected/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-kube-api-access-qhkmc\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.632425 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.132414602 +0000 UTC m=+143.376248371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.635957 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.643329 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.643464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-audit\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.648409 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bhx6w"] Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.653931 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.654916 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-image-import-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.664940 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b6596" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.675724 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.686860 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.696683 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.714927 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm"] Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.716523 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.723311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.729591 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kqdns"] Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.729638 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8xv9n"] Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.730883 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/38087238-7cf9-4f55-9c71-f18caa92ec78-encryption-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.732560 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.733467 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.733835 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9cec2a58-8937-4ecd-979e-9d9657548b69-signing-cabundle\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.733864 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd017984-4283-4306-b54c-f46e394c4523-etcd-client\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.733887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-plugins-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.733909 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-proxy-tls\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.733983 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpkhj\" (UniqueName: \"kubernetes.io/projected/3d0cecf6-1037-494f-a783-682ba2b70960-kube-api-access-fpkhj\") pod \"control-plane-machine-set-operator-78cbb6b69f-662wl\" (UID: \"3d0cecf6-1037-494f-a783-682ba2b70960\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.734063 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0f305b6-07e9-46af-a3fb-3349a9bee60a-srv-cert\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.734176 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-certificates\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.734345 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbvdt\" (UniqueName: \"kubernetes.io/projected/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-kube-api-access-tbvdt\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.734661 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.234643686 +0000 UTC m=+143.478477455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.735119 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.735174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd017984-4283-4306-b54c-f46e394c4523-serving-cert\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.735229 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97cc1a6f-1a30-4ec0-b771-87510a291869-proxy-tls\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.735264 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af5df427-f1bc-40cd-b733-5364595562fb-service-ca-bundle\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.735330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.736488 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-certificates\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.743671 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d162bb2e-700e-48bf-9f6c-e44b7a009a07-secret-volume\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.748936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.750256 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97cc1a6f-1a30-4ec0-b771-87510a291869-proxy-tls\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.751035 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.763971 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-tls\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.767409 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-certs\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.767519 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0f305b6-07e9-46af-a3fb-3349a9bee60a-profile-collector-cert\") pod \"catalog-operator-68c6474976-v672g\" (UID: 
\"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.767697 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txb5m\" (UniqueName: \"kubernetes.io/projected/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-kube-api-access-txb5m\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.768135 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-metrics-tls\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.768283 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.768362 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhkmc\" (UniqueName: \"kubernetes.io/projected/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-kube-api-access-qhkmc\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.768507 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b94f22-aec2-4ebe-9f18-d7a75014baa8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.768797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc48j\" (UniqueName: \"kubernetes.io/projected/af5df427-f1bc-40cd-b733-5364595562fb-kube-api-access-lc48j\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.768882 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8znl\" (UniqueName: \"kubernetes.io/projected/90af26be-4cef-4b0d-b75c-d24f1be33f85-kube-api-access-z8znl\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.768965 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d162bb2e-700e-48bf-9f6c-e44b7a009a07-config-volume\") pod \"collect-profiles-29399745-5cc8l\" (UID: 
\"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.769399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgmtw\" (UniqueName: \"kubernetes.io/projected/f0f305b6-07e9-46af-a3fb-3349a9bee60a-kube-api-access-tgmtw\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.769507 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97cc1a6f-1a30-4ec0-b771-87510a291869-images\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776181 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7k8m\" (UniqueName: \"kubernetes.io/projected/202bab70-f0f2-4c45-96a6-2e47e8abf992-kube-api-access-f7k8m\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkk2b\" (UniqueName: \"kubernetes.io/projected/97cc1a6f-1a30-4ec0-b771-87510a291869-kube-api-access-zkk2b\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776450 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776541 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65n5b\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-kube-api-access-65n5b\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776613 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkmdn\" (UniqueName: \"kubernetes.io/projected/dd017984-4283-4306-b54c-f46e394c4523-kube-api-access-xkmdn\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776760 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-etcd-ca\") pod \"etcd-operator-b45778765-csb8m\" (UID: 
\"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jx9r\" (UniqueName: \"kubernetes.io/projected/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-kube-api-access-4jx9r\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/738a400c-28d7-488a-8706-6c74a5302686-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8shpq\" (UID: \"738a400c-28d7-488a-8706-6c74a5302686\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.776984 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdxk8\" (UniqueName: \"kubernetes.io/projected/738a400c-28d7-488a-8706-6c74a5302686-kube-api-access-wdxk8\") pod \"multus-admission-controller-857f4d67dd-8shpq\" (UID: \"738a400c-28d7-488a-8706-6c74a5302686\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.777017 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.277005173 +0000 UTC m=+143.520838942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.772918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.777191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-bound-sa-token\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.777264 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-node-bootstrap-token\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.777340 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b94f22-aec2-4ebe-9f18-d7a75014baa8-config\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.777432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/202bab70-f0f2-4c45-96a6-2e47e8abf992-serving-cert\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.777537 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf52c96e-c39c-4bd7-a733-fad836e6b65f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xdzjf\" (UID: \"bf52c96e-c39c-4bd7-a733-fad836e6b65f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.777613 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-trusted-ca\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc 
kubenswrapper[4782]: I1124 11:58:13.773504 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97cc1a6f-1a30-4ec0-b771-87510a291869-images\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.778135 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j84vc\" (UniqueName: \"kubernetes.io/projected/67eea2b5-babb-4b66-859a-6881a1f4f0e7-kube-api-access-j84vc\") pod \"migrator-59844c95c7-7hhxh\" (UID: \"67eea2b5-babb-4b66-859a-6881a1f4f0e7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.778173 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmk5f\" (UniqueName: \"kubernetes.io/projected/d162bb2e-700e-48bf-9f6c-e44b7a009a07-kube-api-access-vmk5f\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.778411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fsq\" (UniqueName: \"kubernetes.io/projected/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-kube-api-access-44fsq\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.778721 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrxq\" (UniqueName: \"kubernetes.io/projected/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-kube-api-access-hzrxq\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.795721 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-mountpoint-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.795768 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97cc1a6f-1a30-4ec0-b771-87510a291869-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.795788 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-etcd-service-ca\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.795807 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-trusted-ca\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.795849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:13 crc kubenswrapper[4782]: W1124 11:58:13.779793 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06b66dd7_ceae_4692_a6c6_85102bc27717.slice/crio-7eb90d7699f60fdf549ab348d6918be551970b475b205698f31739c76526de82 WatchSource:0}: Error finding container 7eb90d7699f60fdf549ab348d6918be551970b475b205698f31739c76526de82: Status 404 returned error can't find the container with id 7eb90d7699f60fdf549ab348d6918be551970b475b205698f31739c76526de82 Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.788723 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n"] Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.789065 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.797528 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8fhc\" (UniqueName: \"kubernetes.io/projected/9cec2a58-8937-4ecd-979e-9d9657548b69-kube-api-access-l8fhc\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.797597 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9q2\" (UniqueName: \"kubernetes.io/projected/09bb7a75-abe3-4131-a261-90d4d0cd045e-kube-api-access-sm9q2\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.799437 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-metrics-tls\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.802224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-tls\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.802267 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/97cc1a6f-1a30-4ec0-b771-87510a291869-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.803682 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-trusted-ca\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.804351 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.804503 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-stats-auth\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.804992 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.805169 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90b94f22-aec2-4ebe-9f18-d7a75014baa8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838405 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-config\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838479 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d0cecf6-1037-494f-a783-682ba2b70960-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-662wl\" (UID: \"3d0cecf6-1037-494f-a783-682ba2b70960\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838538 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-default-certificate\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838576 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-csi-data-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: 
\"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838623 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnq85\" (UniqueName: \"kubernetes.io/projected/bf52c96e-c39c-4bd7-a733-fad836e6b65f-kube-api-access-lnq85\") pod \"package-server-manager-789f6589d5-xdzjf\" (UID: \"bf52c96e-c39c-4bd7-a733-fad836e6b65f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838642 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-metrics-certs\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838680 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9cec2a58-8937-4ecd-979e-9d9657548b69-signing-key\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-socket-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90af26be-4cef-4b0d-b75c-d24f1be33f85-webhook-cert\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838769 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-srv-cert\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838786 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-registration-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 
11:58:13.838801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90af26be-4cef-4b0d-b75c-d24f1be33f85-apiservice-cert\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838832 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838847 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90af26be-4cef-4b0d-b75c-d24f1be33f85-tmpfs\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838865 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202bab70-f0f2-4c45-96a6-2e47e8abf992-config\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838890 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-config-volume\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.838905 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.807151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.827831 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49"] Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.833884 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.828450 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.839980 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.810801 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.831157 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.845113 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.850504 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.877354 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.896948 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.902089 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.903117 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzwx4\" (UniqueName: \"kubernetes.io/projected/3b2d93f2-8a27-4def-af47-b6a6f04039b4-kube-api-access-qzwx4\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.913285 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.926534 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.926639 4782 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync 
configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.926684 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config podName:38087238-7cf9-4f55-9c71-f18caa92ec78 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.926670732 +0000 UTC m=+144.170504501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config") pod "apiserver-76f77b778f-qkg9c" (UID: "38087238-7cf9-4f55-9c71-f18caa92ec78") : failed to sync configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927103 4782 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927225 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927208558 +0000 UTC m=+144.171042327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927569 4782 configmap.go:193] Couldn't get configMap openshift-authentication/audit: failed to sync configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927596 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927589119 +0000 UTC m=+144.171422888 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927630 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927651 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927645111 +0000 UTC m=+144.171478880 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927663 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927682 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927677172 +0000 UTC m=+144.171510941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927711 4782 secret.go:188] Couldn't get secret openshift-console/console-oauth-config: failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927728 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config podName:a162cdd4-6657-40da-92f9-5f428fe8dd96 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927723113 +0000 UTC m=+144.171556882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config") pod "console-f9d7485db-qs4j5" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96") : failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927741 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927758 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927752984 +0000 UTC m=+144.171586753 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927750 4782 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927775 4782 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927795 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927790435 +0000 UTC m=+144.171624324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync secret cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.927834 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle podName:3b2d93f2-8a27-4def-af47-b6a6f04039b4 nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.927816026 +0000 UTC m=+144.171649795 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-rt2c7" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4") : failed to sync configmap cache: timed out waiting for the condition Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939495 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90b94f22-aec2-4ebe-9f18-d7a75014baa8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939706 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-config\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939737 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d0cecf6-1037-494f-a783-682ba2b70960-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-662wl\" (UID: \"3d0cecf6-1037-494f-a783-682ba2b70960\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939767 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-default-certificate\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939782 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-csi-data-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939798 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnq85\" (UniqueName: \"kubernetes.io/projected/bf52c96e-c39c-4bd7-a733-fad836e6b65f-kube-api-access-lnq85\") pod \"package-server-manager-789f6589d5-xdzjf\" (UID: \"bf52c96e-c39c-4bd7-a733-fad836e6b65f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939814 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-metrics-certs\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939829 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9cec2a58-8937-4ecd-979e-9d9657548b69-signing-key\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-socket-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939859 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90af26be-4cef-4b0d-b75c-d24f1be33f85-webhook-cert\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939881 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-srv-cert\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-registration-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90af26be-4cef-4b0d-b75c-d24f1be33f85-apiservice-cert\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939931 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90af26be-4cef-4b0d-b75c-d24f1be33f85-tmpfs\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939946 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202bab70-f0f2-4c45-96a6-2e47e8abf992-config\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.939962 4782 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-config-volume\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941128 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941149 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9cec2a58-8937-4ecd-979e-9d9657548b69-signing-cabundle\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941176 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd017984-4283-4306-b54c-f46e394c4523-etcd-client\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-plugins-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941208 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-proxy-tls\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941231 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpkhj\" (UniqueName: \"kubernetes.io/projected/3d0cecf6-1037-494f-a783-682ba2b70960-kube-api-access-fpkhj\") pod \"control-plane-machine-set-operator-78cbb6b69f-662wl\" (UID: \"3d0cecf6-1037-494f-a783-682ba2b70960\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941251 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0f305b6-07e9-46af-a3fb-3349a9bee60a-srv-cert\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbvdt\" (UniqueName: \"kubernetes.io/projected/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-kube-api-access-tbvdt\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " 
pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941474 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941492 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd017984-4283-4306-b54c-f46e394c4523-serving-cert\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941507 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af5df427-f1bc-40cd-b733-5364595562fb-service-ca-bundle\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941533 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d162bb2e-700e-48bf-9f6c-e44b7a009a07-secret-volume\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941550 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-certs\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941566 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0f305b6-07e9-46af-a3fb-3349a9bee60a-profile-collector-cert\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941582 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txb5m\" (UniqueName: \"kubernetes.io/projected/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-kube-api-access-txb5m\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941610 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-metrics-tls\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b94f22-aec2-4ebe-9f18-d7a75014baa8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941824 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc48j\" (UniqueName: \"kubernetes.io/projected/af5df427-f1bc-40cd-b733-5364595562fb-kube-api-access-lc48j\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941840 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8znl\" (UniqueName: \"kubernetes.io/projected/90af26be-4cef-4b0d-b75c-d24f1be33f85-kube-api-access-z8znl\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941856 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d162bb2e-700e-48bf-9f6c-e44b7a009a07-config-volume\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941883 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgmtw\" (UniqueName: \"kubernetes.io/projected/f0f305b6-07e9-46af-a3fb-3349a9bee60a-kube-api-access-tgmtw\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941903 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7k8m\" (UniqueName: \"kubernetes.io/projected/202bab70-f0f2-4c45-96a6-2e47e8abf992-kube-api-access-f7k8m\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkmdn\" (UniqueName: \"kubernetes.io/projected/dd017984-4283-4306-b54c-f46e394c4523-kube-api-access-xkmdn\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.941987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-etcd-ca\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942010 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jx9r\" (UniqueName: \"kubernetes.io/projected/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-kube-api-access-4jx9r\") pod 
\"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942029 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/738a400c-28d7-488a-8706-6c74a5302686-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8shpq\" (UID: \"738a400c-28d7-488a-8706-6c74a5302686\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdxk8\" (UniqueName: \"kubernetes.io/projected/738a400c-28d7-488a-8706-6c74a5302686-kube-api-access-wdxk8\") pod \"multus-admission-controller-857f4d67dd-8shpq\" (UID: \"738a400c-28d7-488a-8706-6c74a5302686\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942069 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-node-bootstrap-token\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942084 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b94f22-aec2-4ebe-9f18-d7a75014baa8-config\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/202bab70-f0f2-4c45-96a6-2e47e8abf992-serving-cert\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf52c96e-c39c-4bd7-a733-fad836e6b65f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xdzjf\" (UID: \"bf52c96e-c39c-4bd7-a733-fad836e6b65f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942146 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-trusted-ca\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j84vc\" (UniqueName: \"kubernetes.io/projected/67eea2b5-babb-4b66-859a-6881a1f4f0e7-kube-api-access-j84vc\") pod \"migrator-59844c95c7-7hhxh\" (UID: \"67eea2b5-babb-4b66-859a-6881a1f4f0e7\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942199 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmk5f\" (UniqueName: \"kubernetes.io/projected/d162bb2e-700e-48bf-9f6c-e44b7a009a07-kube-api-access-vmk5f\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942224 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44fsq\" (UniqueName: \"kubernetes.io/projected/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-kube-api-access-44fsq\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942253 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzrxq\" (UniqueName: \"kubernetes.io/projected/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-kube-api-access-hzrxq\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-mountpoint-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-etcd-service-ca\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942306 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942331 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8fhc\" (UniqueName: \"kubernetes.io/projected/9cec2a58-8937-4ecd-979e-9d9657548b69-kube-api-access-l8fhc\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942347 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9q2\" (UniqueName: \"kubernetes.io/projected/09bb7a75-abe3-4131-a261-90d4d0cd045e-kube-api-access-sm9q2\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942364 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-metrics-tls\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.942403 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-stats-auth\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.940551 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-config-volume\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.944441 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.944512 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9cec2a58-8937-4ecd-979e-9d9657548b69-signing-cabundle\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.945316 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-socket-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.945582 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-csi-data-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.945733 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-etcd-ca\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: E1124 11:58:13.945855 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.445838272 +0000 UTC m=+143.689672041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.946722 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-metrics-certs\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.947114 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9cec2a58-8937-4ecd-979e-9d9657548b69-signing-key\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.950657 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-default-certificate\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.952134 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-trusted-ca\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.952568 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-mountpoint-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.952655 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b94f22-aec2-4ebe-9f18-d7a75014baa8-config\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.952777 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.952971 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.954859 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af5df427-f1bc-40cd-b733-5364595562fb-service-ca-bundle\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.955202 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-etcd-service-ca\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.958324 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-node-bootstrap-token\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.958352 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd017984-4283-4306-b54c-f46e394c4523-config\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.960581 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d162bb2e-700e-48bf-9f6c-e44b7a009a07-secret-volume\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.960643 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-registration-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.962060 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d162bb2e-700e-48bf-9f6c-e44b7a009a07-config-volume\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.962293 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90af26be-4cef-4b0d-b75c-d24f1be33f85-tmpfs\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.963025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/09bb7a75-abe3-4131-a261-90d4d0cd045e-plugins-dir\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.963896 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/738a400c-28d7-488a-8706-6c74a5302686-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8shpq\" (UID: \"738a400c-28d7-488a-8706-6c74a5302686\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.963937 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af5df427-f1bc-40cd-b733-5364595562fb-stats-auth\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.960584 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd017984-4283-4306-b54c-f46e394c4523-serving-cert\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.965041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202bab70-f0f2-4c45-96a6-2e47e8abf992-config\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.965125 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-certs\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.965881 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-srv-cert\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.966224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/202bab70-f0f2-4c45-96a6-2e47e8abf992-serving-cert\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.967672 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf52c96e-c39c-4bd7-a733-fad836e6b65f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xdzjf\" (UID: \"bf52c96e-c39c-4bd7-a733-fad836e6b65f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.967919 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-metrics-tls\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 
11:58:13.968411 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-proxy-tls\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.968467 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90af26be-4cef-4b0d-b75c-d24f1be33f85-webhook-cert\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.968758 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b94f22-aec2-4ebe-9f18-d7a75014baa8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.969239 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.969486 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-metrics-tls\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.969760 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90af26be-4cef-4b0d-b75c-d24f1be33f85-apiservice-cert\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.970809 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.971063 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0f305b6-07e9-46af-a3fb-3349a9bee60a-profile-collector-cert\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.974847 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd017984-4283-4306-b54c-f46e394c4523-etcd-client\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.976563 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0f305b6-07e9-46af-a3fb-3349a9bee60a-srv-cert\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.993245 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d0cecf6-1037-494f-a783-682ba2b70960-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-662wl\" (UID: \"3d0cecf6-1037-494f-a783-682ba2b70960\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl"
Nov 24 11:58:13 crc kubenswrapper[4782]: I1124 11:58:13.995951 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.035767 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.037459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.044427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.044840 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.544829642 +0000 UTC m=+143.788663411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.053882 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.081160 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.094262 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.099209 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.111460 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.134977 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.145558 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.146034 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.646022156 +0000 UTC m=+143.889855925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.150723 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.172534 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.178602 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndj55\" (UniqueName: \"kubernetes.io/projected/a162cdd4-6657-40da-92f9-5f428fe8dd96-kube-api-access-ndj55\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.180198 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7hpsn"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.182386 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.196151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhldt\" (UniqueName: \"kubernetes.io/projected/c0d9b214-4fda-42e0-ad8a-fed4e0637175-kube-api-access-xhldt\") pod \"downloads-7954f5f757-svpdv\" (UID: \"c0d9b214-4fda-42e0-ad8a-fed4e0637175\") " pod="openshift-console/downloads-7954f5f757-svpdv"
Nov 24 11:58:14 crc kubenswrapper[4782]: W1124 11:58:14.206419 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddedf6854_b0bf_4266_b1bf_144d192c967a.slice/crio-5ea2d02df5a852c02aeff1912157d005527347e191af4deec6bd0e04a353c1cf WatchSource:0}: Error finding container 5ea2d02df5a852c02aeff1912157d005527347e191af4deec6bd0e04a353c1cf: Status 404 returned error can't find the container with id 5ea2d02df5a852c02aeff1912157d005527347e191af4deec6bd0e04a353c1cf
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.229244 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhkmc\" (UniqueName: \"kubernetes.io/projected/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-kube-api-access-qhkmc\") pod \"marketplace-operator-79b997595-7z6sc\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.248899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.249602 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.74959058 +0000 UTC m=+143.993424349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.253234 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65n5b\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-kube-api-access-65n5b\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.253801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkk2b\" (UniqueName: \"kubernetes.io/projected/97cc1a6f-1a30-4ec0-b771-87510a291869-kube-api-access-zkk2b\") pod \"machine-config-operator-74547568cd-2v5p2\" (UID: \"97cc1a6f-1a30-4ec0-b771-87510a291869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.292562 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" event={"ID":"dedf6854-b0bf-4266-b1bf-144d192c967a","Type":"ContainerStarted","Data":"5ea2d02df5a852c02aeff1912157d005527347e191af4deec6bd0e04a353c1cf"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.295308 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" event={"ID":"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32","Type":"ContainerStarted","Data":"7086f59f0b86b6f2ac336eb3597111b6c823b034a653b9dc33ff2f7bb3d0f074"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.296889 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-bound-sa-token\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.299526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" event={"ID":"59b13b1d-d00e-439e-ba63-29a792d3dbf6","Type":"ContainerStarted","Data":"11d0d1dfcfb26417ba90415bd09a2bdf747f23da8ca358eca8a53ccfd7e887cd"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.308155 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgmtw\" (UniqueName: \"kubernetes.io/projected/f0f305b6-07e9-46af-a3fb-3349a9bee60a-kube-api-access-tgmtw\") pod \"catalog-operator-68c6474976-v672g\" (UID: \"f0f305b6-07e9-46af-a3fb-3349a9bee60a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.308518 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" event={"ID":"9babc041-e14e-4226-aebc-50e771089c3c","Type":"ContainerStarted","Data":"5a2f03a1fa6aaa32f84a457830d43fb12af32e1a081f9c6fbcad116af93803da"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.308536 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" event={"ID":"9babc041-e14e-4226-aebc-50e771089c3c","Type":"ContainerStarted","Data":"9a043d844a9057639df9f890e45b006c9d9d06fbd210344276963a829a0b6a17"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.308546 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" event={"ID":"9babc041-e14e-4226-aebc-50e771089c3c","Type":"ContainerStarted","Data":"17b03c18ae261bdfb40a061384de41b50005c4a1f2dde9fbdc8250349e670ca7"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.311163 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.312219 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" event={"ID":"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3","Type":"ContainerStarted","Data":"01c505962d6166058bc5b0d1ca1bd29912764d701c294bda95ea855a78d3320c"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.312249 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" event={"ID":"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3","Type":"ContainerStarted","Data":"653139f4b7d5b0024e4799b20187c6da54dfc5ddadd95a8e67583a7fec95610a"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.312960 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.316923 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" event={"ID":"b84edfa0-ff1b-4a50-9019-b340b41b9f53","Type":"ContainerStarted","Data":"d6242898b7c894231ecd7bc2a33f9da6091b3e4e2634cfe596f96d7c7844e149"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.316991 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" event={"ID":"b84edfa0-ff1b-4a50-9019-b340b41b9f53","Type":"ContainerStarted","Data":"b9da019f267cbc2c2d9e3c00963a2241406c4dab2a0cea7c1044464df450575b"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.317220 4782 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8xv9n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.317254 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" podUID="e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.319974 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kqdns" event={"ID":"06b66dd7-ceae-4692-a6c6-85102bc27717","Type":"ContainerStarted","Data":"94e4cc212851c80eddae81708e0be19685bfb1dec9cf906935bc4f77ece4eeb0"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.319997 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kqdns" event={"ID":"06b66dd7-ceae-4692-a6c6-85102bc27717","Type":"ContainerStarted","Data":"7eb90d7699f60fdf549ab348d6918be551970b475b205698f31739c76526de82"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.320009 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-kqdns"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.320954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" event={"ID":"79cdc549-a93f-4a59-8f97-4183fe762ce2","Type":"ContainerStarted","Data":"e418f27bb72a90fa60e2bdaee6daeb3350ad827ba8188f33a756c9973502668a"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.325844 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7k8m\" (UniqueName: \"kubernetes.io/projected/202bab70-f0f2-4c45-96a6-2e47e8abf992-kube-api-access-f7k8m\") pod \"service-ca-operator-777779d784-w9r9s\" (UID: \"202bab70-f0f2-4c45-96a6-2e47e8abf992\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.327532 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" event={"ID":"9023f41f-9e84-42aa-ae26-da378cf12eba","Type":"ContainerStarted","Data":"dae63ea6622a404f89604f394cabdc23fbc9d3766e6410cb05e0b61b942ce649"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.327561 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" event={"ID":"9023f41f-9e84-42aa-ae26-da378cf12eba","Type":"ContainerStarted","Data":"9c04d44128ed112982fd97f6d4a0a901582cb0eaa2cc7731f8111dd88ce5a88f"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.329312 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" event={"ID":"05464aa5-d507-4bdb-9f21-1de746c2e4ba","Type":"ContainerStarted","Data":"98cd27f3c0c054bd6795bbb7eaf3f58d47ab55418cae717ca7eb5129ae0cf359"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.329344 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" event={"ID":"05464aa5-d507-4bdb-9f21-1de746c2e4ba","Type":"ContainerStarted","Data":"d2b504e9afbdfda838939d38f098aa600210619e67c6298793a66078ca7e0b37"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.329475 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" event={"ID":"05464aa5-d507-4bdb-9f21-1de746c2e4ba","Type":"ContainerStarted","Data":"25a986134a50c997389f53a777986461c88df22271430a1c2a3b4fa87573c26d"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.330515 4782 patch_prober.go:28] interesting pod/console-operator-58897d9998-kqdns container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.330551 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kqdns" podUID="06b66dd7-ceae-4692-a6c6-85102bc27717" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.350132 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.350433 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.850411213 +0000 UTC m=+144.094244982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.350767 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.352356 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" event={"ID":"8b855f8b-c1a4-4ab4-8400-c4b93a831025","Type":"ContainerStarted","Data":"ea1b46bf68b08a93d921345c6e08b50832cd00fea17bbb7c1ae33d2b78c8415e"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.352427 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" event={"ID":"8b855f8b-c1a4-4ab4-8400-c4b93a831025","Type":"ContainerStarted","Data":"89bf973c8248a3df3b578b3b6949b0cf4f8cda5bdcd0a8ad3664190c8be86aeb"}
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.352699 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.85269016 +0000 UTC m=+144.096523919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.353391 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.359085 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkmdn\" (UniqueName: \"kubernetes.io/projected/dd017984-4283-4306-b54c-f46e394c4523-kube-api-access-xkmdn\") pod \"etcd-operator-b45778765-csb8m\" (UID: \"dd017984-4283-4306-b54c-f46e394c4523\") " pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.360000 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.371048 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b6596"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.382875 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90b94f22-aec2-4ebe-9f18-d7a75014baa8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s7z6s\" (UID: \"90b94f22-aec2-4ebe-9f18-d7a75014baa8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.392508 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jx9r\" (UniqueName: \"kubernetes.io/projected/cae42d18-f8e2-4c6b-9b7f-c01fe2d72947-kube-api-access-4jx9r\") pod \"olm-operator-6b444d44fb-7kskc\" (UID: \"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.395728 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" event={"ID":"80dada1d-e829-4a4b-804e-2fac2553dbc4","Type":"ContainerStarted","Data":"b4a06420be8d8a4d52b02aa738f8e03eb5a9d887e7140717f5a748fc800fd03a"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.395771 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" event={"ID":"80dada1d-e829-4a4b-804e-2fac2553dbc4","Type":"ContainerStarted","Data":"103f3ce6cb3047c08d4e2b653385427bb5bedb24a6d981ed6c6b9b6b34dbd595"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.406449 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" event={"ID":"1cc206e2-96fb-44f2-9717-f2a4c182b776","Type":"ContainerStarted","Data":"adea008d412267799a89114d47189391f08b6a749ffa30cd9a0b85c9fbe29bf9"}
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.412233 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmk5f\" (UniqueName: \"kubernetes.io/projected/d162bb2e-700e-48bf-9f6c-e44b7a009a07-kube-api-access-vmk5f\") pod \"collect-profiles-29399745-5cc8l\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.423663 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdxk8\" (UniqueName: \"kubernetes.io/projected/738a400c-28d7-488a-8706-6c74a5302686-kube-api-access-wdxk8\") pod \"multus-admission-controller-857f4d67dd-8shpq\" (UID: \"738a400c-28d7-488a-8706-6c74a5302686\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.451556 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.451813 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.951787223 +0000 UTC m=+144.195620992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.453771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnq85\" (UniqueName: \"kubernetes.io/projected/bf52c96e-c39c-4bd7-a733-fad836e6b65f-kube-api-access-lnq85\") pod \"package-server-manager-789f6589d5-xdzjf\" (UID: \"bf52c96e-c39c-4bd7-a733-fad836e6b65f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.454671 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.456308 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:14.956294374 +0000 UTC m=+144.200128143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.463398 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-svpdv"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.466275 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j84vc\" (UniqueName: \"kubernetes.io/projected/67eea2b5-babb-4b66-859a-6881a1f4f0e7-kube-api-access-j84vc\") pod \"migrator-59844c95c7-7hhxh\" (UID: \"67eea2b5-babb-4b66-859a-6881a1f4f0e7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.482978 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.483582 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.492165 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.492545 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.496289 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbvdt\" (UniqueName: \"kubernetes.io/projected/b2ca95c8-276e-4e49-a03b-a75f8b94dc93-kube-api-access-tbvdt\") pod \"machine-config-server-cjgtx\" (UID: \"b2ca95c8-276e-4e49-a03b-a75f8b94dc93\") " pod="openshift-machine-config-operator/machine-config-server-cjgtx"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.500988 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.508702 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fsq\" (UniqueName: \"kubernetes.io/projected/a042cc3b-be2f-4620-a3ba-332ed2ea13d2-kube-api-access-44fsq\") pod \"dns-default-jzdvr\" (UID: \"a042cc3b-be2f-4620-a3ba-332ed2ea13d2\") " pod="openshift-dns/dns-default-jzdvr"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.530284 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jzdvr"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.537145 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzrxq\" (UniqueName: \"kubernetes.io/projected/6b347fdb-e1af-48f2-9496-cbbfa885ad1e-kube-api-access-hzrxq\") pod \"machine-config-controller-84d6567774-wfsqv\" (UID: \"6b347fdb-e1af-48f2-9496-cbbfa885ad1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.545128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9q2\" (UniqueName: \"kubernetes.io/projected/09bb7a75-abe3-4131-a261-90d4d0cd045e-kube-api-access-sm9q2\") pod \"csi-hostpathplugin-7q7d9\" (UID: \"09bb7a75-abe3-4131-a261-90d4d0cd045e\") " pod="hostpath-provisioner/csi-hostpathplugin-7q7d9"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.557030 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.557455 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.057440217 +0000 UTC m=+144.301273986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.565403 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8fhc\" (UniqueName: \"kubernetes.io/projected/9cec2a58-8937-4ecd-979e-9d9657548b69-kube-api-access-l8fhc\") pod \"service-ca-9c57cc56f-5k47f\" (UID: \"9cec2a58-8937-4ecd-979e-9d9657548b69\") " pod="openshift-service-ca/service-ca-9c57cc56f-5k47f"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.572714 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cjgtx"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.586507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc48j\" (UniqueName: \"kubernetes.io/projected/af5df427-f1bc-40cd-b733-5364595562fb-kube-api-access-lc48j\") pod \"router-default-5444994796-49vlc\" (UID: \"af5df427-f1bc-40cd-b733-5364595562fb\") " pod="openshift-ingress/router-default-5444994796-49vlc"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.602007 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.602588 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.613788 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.619506 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.620567 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8znl\" (UniqueName: \"kubernetes.io/projected/90af26be-4cef-4b0d-b75c-d24f1be33f85-kube-api-access-z8znl\") pod \"packageserver-d55dfcdfc-tdphh\" (UID: \"90af26be-4cef-4b0d-b75c-d24f1be33f85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.631235 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-49vlc"
Nov 24 11:58:14 crc kubenswrapper[4782]: W1124 11:58:14.632107 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f305b6_07e9_46af_a3fb_3349a9bee60a.slice/crio-454b6006acd317d8d0272853cda4a791e98c624e53d037853ec96ad01e263814 WatchSource:0}: Error finding container 454b6006acd317d8d0272853cda4a791e98c624e53d037853ec96ad01e263814: Status 404 returned error can't find the container with id 454b6006acd317d8d0272853cda4a791e98c624e53d037853ec96ad01e263814
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.639324 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpkhj\" (UniqueName: \"kubernetes.io/projected/3d0cecf6-1037-494f-a783-682ba2b70960-kube-api-access-fpkhj\") pod \"control-plane-machine-set-operator-78cbb6b69f-662wl\" (UID: \"3d0cecf6-1037-494f-a783-682ba2b70960\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl"
Nov 24 11:58:14 crc kubenswrapper[4782]: W1124 11:58:14.640440 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2ca95c8_276e_4e49_a03b_a75f8b94dc93.slice/crio-b8ab00e27b1d82f61ee08a36e7d5b5f2a777af1250c3627733a94539674f1093 WatchSource:0}: Error finding container b8ab00e27b1d82f61ee08a36e7d5b5f2a777af1250c3627733a94539674f1093: Status 404 returned error can't find the container with id b8ab00e27b1d82f61ee08a36e7d5b5f2a777af1250c3627733a94539674f1093
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.642693 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.659214 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.659838 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc"
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.660196 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.160184666 +0000 UTC m=+144.404018425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.666473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.670361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txb5m\" (UniqueName: \"kubernetes.io/projected/cf6e8e63-ab44-43fa-b46e-c9a6643d331a-kube-api-access-txb5m\") pod \"ingress-operator-5b745b69d9-52jwz\" (UID: \"cf6e8e63-ab44-43fa-b46e-c9a6643d331a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.749905 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.763539 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.763941 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.263928224 +0000 UTC m=+144.507761993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.775646 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5k47f"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.807245 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-svpdv"]
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.824804 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.869452 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:14 crc kubenswrapper[4782]: E1124 11:58:14.871729 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.371712611 +0000 UTC m=+144.615546380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.875467 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl"
Nov 24 11:58:14 crc kubenswrapper[4782]: I1124 11:58:14.887596 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.998842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999196 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999220 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999238 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999277 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999305 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999347 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999391 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999431 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:14.999456 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.002034 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.502004084 +0000 UTC m=+144.745837853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.003214 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.003387 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38087238-7cf9-4f55-9c71-f18caa92ec78-config\") pod \"apiserver-76f77b778f-qkg9c\" (UID: \"38087238-7cf9-4f55-9c71-f18caa92ec78\") " pod="openshift-apiserver/apiserver-76f77b778f-qkg9c"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.003675 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.004334 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.016952 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.018070 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.018087 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config\") pod \"console-f9d7485db-qs4j5\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " pod="openshift-console/console-f9d7485db-qs4j5"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.021011 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.028750 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rt2c7\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.078720 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.094904 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.101059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.101424 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.601413566 +0000 UTC m=+144.845247335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.104415 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l"]
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.147604 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qs4j5"
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.203032 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.203338 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.703323372 +0000 UTC m=+144.947157141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.306950 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.307196 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.807181494 +0000 UTC m=+145.051015263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.416892 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.417109 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.917083792 +0000 UTC m=+145.160917551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.417442 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.420094 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:15.920075629 +0000 UTC m=+145.163909398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.480592 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" event={"ID":"d162bb2e-700e-48bf-9f6c-e44b7a009a07","Type":"ContainerStarted","Data":"176520a94a6a5bc6db05a1236ce7899cfdc1b5dd78e0adada2e308d2adf24900"}
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.519163 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.519523 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.019506182 +0000 UTC m=+145.263339951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.523391 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" event={"ID":"1cc206e2-96fb-44f2-9717-f2a4c182b776","Type":"ContainerStarted","Data":"000b18004fba3c7cfdc35715a9f2d2cc03c0167b5f03e912d44385161b6f98c9"}
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.549475 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7z6sc"]
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.549510 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s"]
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.549522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" event={"ID":"79cdc549-a93f-4a59-8f97-4183fe762ce2","Type":"ContainerStarted","Data":"0dbcb159ef9acad8966321b5ef62eb8771a8df72bf68f2d8259f1b215531fff2"}
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.549551 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" event={"ID":"79cdc549-a93f-4a59-8f97-4183fe762ce2","Type":"ContainerStarted","Data":"b004a45fef13d9e8b314a7f26b49be6ff1037fcbd5e2f79aca8cdb40f2c25ac6"}
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.589173 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-csb8m"]
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.625516 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh"]
Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.634919 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.637347 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.137333952 +0000 UTC m=+145.381167721 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.666092 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t9k49" podStartSLOduration=123.666071981 podStartE2EDuration="2m3.666071981s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:15.617548304 +0000 UTC m=+144.861382083" watchObservedRunningTime="2025-11-24 11:58:15.666071981 +0000 UTC m=+144.909905750" Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.695517 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-49vlc" event={"ID":"af5df427-f1bc-40cd-b733-5364595562fb","Type":"ContainerStarted","Data":"c83147f3681d585e9a707d540fce00940f3056e35154fb75dcde211692ace0ca"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.740987 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b6596" event={"ID":"950372c9-3a2e-4f3f-a1c7-04c1fa097be6","Type":"ContainerStarted","Data":"1021ad51ea69f0ef021137c9e61e97f589049a5cd889d523351b08664141e8b5"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.741036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b6596" event={"ID":"950372c9-3a2e-4f3f-a1c7-04c1fa097be6","Type":"ContainerStarted","Data":"16aa7c9506346d6fa3cb4594ffa1f3b692a975be369ad8bc1604f80507622789"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.749558 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.750551 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.250523206 +0000 UTC m=+145.494356975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.827984 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" event={"ID":"e7515d5c-f17a-43b1-a70a-869c7fbf7388","Type":"ContainerStarted","Data":"f7b3d52f32c6df629a87154bd9898a649b9377e1c90f34c746f8f6280b5facb0"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.828273 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" event={"ID":"e7515d5c-f17a-43b1-a70a-869c7fbf7388","Type":"ContainerStarted","Data":"2acbeed8358dabe0d045aa6157915aca6266154b0b4895f42e0045c6dde6175f"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.852987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.853861 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.353843563 +0000 UTC m=+145.597677332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.856917 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" podStartSLOduration=123.856902962 podStartE2EDuration="2m3.856902962s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:15.788749162 +0000 UTC m=+145.032582931" watchObservedRunningTime="2025-11-24 11:58:15.856902962 +0000 UTC m=+145.100736731" Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.869645 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" event={"ID":"dedf6854-b0bf-4266-b1bf-144d192c967a","Type":"ContainerStarted","Data":"4c73757ee9b3998557fd1f8258374aa9abf3de09b87abd17673ed8f582654d41"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.902763 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" event={"ID":"f0f305b6-07e9-46af-a3fb-3349a9bee60a","Type":"ContainerStarted","Data":"9f7d9b27f5dc78b8ad4e86ba667de223e2992fb45c56d4c4be6aaa83d7605648"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.902807 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" event={"ID":"f0f305b6-07e9-46af-a3fb-3349a9bee60a","Type":"ContainerStarted","Data":"454b6006acd317d8d0272853cda4a791e98c624e53d037853ec96ad01e263814"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.903662 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.904479 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" podStartSLOduration=123.9044556 podStartE2EDuration="2m3.9044556s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:15.89417625 +0000 UTC m=+145.138010019" watchObservedRunningTime="2025-11-24 11:58:15.9044556 +0000 UTC m=+145.148289379" Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.906454 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s"] Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.916231 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv"] Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.922321 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2"] Nov 24 11:58:15 crc kubenswrapper[4782]: 
I1124 11:58:15.929281 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-svpdv" event={"ID":"c0d9b214-4fda-42e0-ad8a-fed4e0637175","Type":"ContainerStarted","Data":"c0c7b43f132cb72563aeb53b7b1e818d1d225e4bc76e24e03c8c20729891627b"} Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.930308 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-svpdv" Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.949944 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8shpq"] Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.950833 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v672g container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.950867 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" podUID="f0f305b6-07e9-46af-a3fb-3349a9bee60a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.953899 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:15 crc kubenswrapper[4782]: E1124 11:58:15.954752 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.454738088 +0000 UTC m=+145.698571857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.982524 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.982569 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:58:15 crc kubenswrapper[4782]: I1124 11:58:15.983156 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" event={"ID":"cc1d5729-b535-4a1d-89e9-f0cf5c28cd32","Type":"ContainerStarted","Data":"204799ad1a84060719cb1c669c59bc3e4177c26a97b336e6fc1ce21144b118ee"} Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.046492 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cjgtx" event={"ID":"b2ca95c8-276e-4e49-a03b-a75f8b94dc93","Type":"ContainerStarted","Data":"b8ab00e27b1d82f61ee08a36e7d5b5f2a777af1250c3627733a94539674f1093"} Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.055781 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.056070 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.556058986 +0000 UTC m=+145.799892755 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.124321 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mw86p" podStartSLOduration=124.124302458 podStartE2EDuration="2m4.124302458s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:16.123362101 +0000 UTC m=+145.367195870" watchObservedRunningTime="2025-11-24 11:58:16.124302458 +0000 UTC m=+145.368136227" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.155892 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jzdvr"] Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.156412 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.169824 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.669800177 +0000 UTC m=+145.913633946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.262246 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.262851 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.762833092 +0000 UTC m=+146.006666861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.291728 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.325452 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bhx6w" podStartSLOduration=123.3254346 podStartE2EDuration="2m3.3254346s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:16.198873925 +0000 UTC m=+145.442707694" watchObservedRunningTime="2025-11-24 11:58:16.3254346 +0000 UTC m=+145.569268359" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.325544 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzds7" podStartSLOduration=124.325541003 podStartE2EDuration="2m4.325541003s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:16.311843333 +0000 UTC m=+145.555677092" watchObservedRunningTime="2025-11-24 11:58:16.325541003 +0000 UTC m=+145.569374772" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.378868 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc"] Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.379243 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.380016 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.879996773 +0000 UTC m=+146.123830532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.386383 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k8zwr" podStartSLOduration=123.386348658 podStartE2EDuration="2m3.386348658s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:16.385812283 +0000 UTC m=+145.629646052" watchObservedRunningTime="2025-11-24 11:58:16.386348658 +0000 UTC m=+145.630182427" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.461271 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.462257 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.490347 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.490708 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:16.990680014 +0000 UTC m=+146.234513773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.503737 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.577452 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4b65n" podStartSLOduration=124.577422746 podStartE2EDuration="2m4.577422746s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:16.576328594 +0000 UTC m=+145.820162353" watchObservedRunningTime="2025-11-24 11:58:16.577422746 +0000 UTC m=+145.821256505" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.591274 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.591395 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.091361063 +0000 UTC m=+146.335194832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.593078 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz"] Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.603997 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.604365 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.104353403 +0000 UTC m=+146.348187162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.628343 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbscm" podStartSLOduration=124.628327183 podStartE2EDuration="2m4.628327183s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:16.627182439 +0000 UTC m=+145.871016208" watchObservedRunningTime="2025-11-24 11:58:16.628327183 +0000 UTC m=+145.872160952" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.635783 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-kqdns" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.704917 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.705224 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.205200417 +0000 UTC m=+146.449034186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.705384 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.705736 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.205721792 +0000 UTC m=+146.449555561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.727594 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-kqdns" podStartSLOduration=124.72757597 podStartE2EDuration="2m4.72757597s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:16.714711334 +0000 UTC m=+145.958545103" watchObservedRunningTime="2025-11-24 11:58:16.72757597 +0000 UTC m=+145.971409739" Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.743228 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qs4j5"] Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.836572 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.837091 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.337076007 +0000 UTC m=+146.580909776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.934553 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rt2c7"] Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.941189 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:16 crc kubenswrapper[4782]: E1124 11:58:16.941506 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.441494805 +0000 UTC m=+146.685328574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:16 crc kubenswrapper[4782]: I1124 11:58:16.978401 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7q7d9"] Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.006727 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" podStartSLOduration=124.006697959 podStartE2EDuration="2m4.006697959s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.004896846 +0000 UTC m=+146.248730615" watchObservedRunningTime="2025-11-24 11:58:17.006697959 +0000 UTC m=+146.250531728" Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.044019 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.044312 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.544295456 +0000 UTC m=+146.788129235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.050129 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf" podStartSLOduration=124.050106466 podStartE2EDuration="2m4.050106466s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.040395252 +0000 UTC m=+146.284229021" watchObservedRunningTime="2025-11-24 11:58:17.050106466 +0000 UTC m=+146.293940245" Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.074743 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g" podStartSLOduration=124.074726355 podStartE2EDuration="2m4.074726355s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.072937152 +0000 UTC m=+146.316770921" watchObservedRunningTime="2025-11-24 11:58:17.074726355 +0000 UTC m=+146.318560124" Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.129641 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh" event={"ID":"67eea2b5-babb-4b66-859a-6881a1f4f0e7","Type":"ContainerStarted","Data":"7efebf7ce87eedb61c0b05fb77d0baab5f3381d6871b1c377895353866c036f9"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.129687 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh" event={"ID":"67eea2b5-babb-4b66-859a-6881a1f4f0e7","Type":"ContainerStarted","Data":"24d8ade532114cae931d34778b42baa9c62488d19e87101dfb2bda1e1ab8bce1"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.139757 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" event={"ID":"cf6e8e63-ab44-43fa-b46e-c9a6643d331a","Type":"ContainerStarted","Data":"c1672c7c0fe25b88c6ca269c0ea3db32c70840cf82bd8f45a85fd84d762ef3ea"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.145959 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ft9xx" podStartSLOduration=124.145943134 podStartE2EDuration="2m4.145943134s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.14239343 +0000 UTC m=+146.386227199" watchObservedRunningTime="2025-11-24 11:58:17.145943134 +0000 UTC m=+146.389776903" Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.148115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.148524 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.648512459 +0000 UTC m=+146.892346228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.200933 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"] Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.206950 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" event={"ID":"202bab70-f0f2-4c45-96a6-2e47e8abf992","Type":"ContainerStarted","Data":"9ba20e8ebf911479ddaee9cc8a7942a77986e4bdd341c982fb759a759b066183"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.207200 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" event={"ID":"202bab70-f0f2-4c45-96a6-2e47e8abf992","Type":"ContainerStarted","Data":"7443c6d681775327b383ee720d67aa8d409a358c6c9d51dcd0855afbb1715630"} Nov 24 11:58:17 crc kubenswrapper[4782]: W1124 11:58:17.236972 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90af26be_4cef_4b0d_b75c_d24f1be33f85.slice/crio-cd965c6facf9629ecbc9153f3ddffcf3725e231b98807ca8854c0e36d1637c36 WatchSource:0}: Error finding container cd965c6facf9629ecbc9153f3ddffcf3725e231b98807ca8854c0e36d1637c36: Status 404 returned error can't find the container with id cd965c6facf9629ecbc9153f3ddffcf3725e231b98807ca8854c0e36d1637c36 Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.253304 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.254518 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.754499373 +0000 UTC m=+146.998333132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.258304 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cjgtx" podStartSLOduration=6.258281493 podStartE2EDuration="6.258281493s" podCreationTimestamp="2025-11-24 11:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.195002146 +0000 UTC m=+146.438835945" watchObservedRunningTime="2025-11-24 11:58:17.258281493 +0000 UTC m=+146.502115262" Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.315266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cjgtx" event={"ID":"b2ca95c8-276e-4e49-a03b-a75f8b94dc93","Type":"ContainerStarted","Data":"655dca27e4f0b5dc7f45714db9102e828bca5dacc3b0d7e3a8d6246c516eb5ad"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.340275 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5k47f"] Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.358240 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.359806 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.859795727 +0000 UTC m=+147.103629496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.361217 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9dcvb" podStartSLOduration=125.361200238 podStartE2EDuration="2m5.361200238s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.360155117 +0000 UTC m=+146.603988876" watchObservedRunningTime="2025-11-24 11:58:17.361200238 +0000 UTC m=+146.605034007" Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.361415 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" event={"ID":"dd017984-4283-4306-b54c-f46e394c4523","Type":"ContainerStarted","Data":"a25f50af466f4de14ba021a4fe446dbce0518f875cffcf6c32f3e3a1bdfc0dcd"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.382277 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" event={"ID":"90b94f22-aec2-4ebe-9f18-d7a75014baa8","Type":"ContainerStarted","Data":"3f48583bc26073e516b9eaafdc86483fecdd9b4335cdb082da2a45c07572b2d5"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.383598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" event={"ID":"1cc206e2-96fb-44f2-9717-f2a4c182b776","Type":"ContainerStarted","Data":"edb76410487426aedc820a47a1f10404eb228edcb3b55dc34c3300a4d709a691"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.437554 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkg9c"] Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.450907 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-49vlc" event={"ID":"af5df427-f1bc-40cd-b733-5364595562fb","Type":"ContainerStarted","Data":"b057ad7f6af1855665608d45530f0745851f7aa22192e3efd44472c0064e7147"} Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.461967 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2lckb" Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.462029 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.473644 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" event={"ID":"6b347fdb-e1af-48f2-9496-cbbfa885ad1e","Type":"ContainerStarted","Data":"4c78831da6ee8ceaf6d854d57a49402d505bab59257ad734749f87df0ffb73ee"} Nov 24 11:58:17 crc 
kubenswrapper[4782]: I1124 11:58:17.473688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" event={"ID":"6b347fdb-e1af-48f2-9496-cbbfa885ad1e","Type":"ContainerStarted","Data":"f1052971220919a9497f88e8335040d57c66b23e057af289bfb57f655ab08455"}
Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.475873 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:17.975850935 +0000 UTC m=+147.219684704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.523402 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-b6596" podStartSLOduration=6.523388143 podStartE2EDuration="6.523388143s" podCreationTimestamp="2025-11-24 11:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.484967541 +0000 UTC m=+146.728801320" watchObservedRunningTime="2025-11-24 11:58:17.523388143 +0000 UTC m=+146.767221912"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.537720 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" event={"ID":"97cc1a6f-1a30-4ec0-b771-87510a291869","Type":"ContainerStarted","Data":"bbd07b7106a8211be9aa3e3cad58f728c8cb9e05c665b4e506909af04d71d9ea"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.537759 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" event={"ID":"97cc1a6f-1a30-4ec0-b771-87510a291869","Type":"ContainerStarted","Data":"70a1ac66b71d52253dcf16601e31e8ff12e1cd5e040e8c983035d7a4025ebf08"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.539595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" event={"ID":"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947","Type":"ContainerStarted","Data":"205155545a546df3abf857ac15d2be2bc8410fecee140528928631b042359895"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.540177 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.553510 4782 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7kskc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.553548 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" podUID="cae42d18-f8e2-4c6b-9b7f-c01fe2d72947" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.556459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf"]
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.566710 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jzdvr" event={"ID":"a042cc3b-be2f-4620-a3ba-332ed2ea13d2","Type":"ContainerStarted","Data":"e00ef23aaf78bf63d45ddf7047589f6c5f0807371ec8031f823c7bf244808a8c"}
Nov 24 11:58:17 crc kubenswrapper[4782]: W1124 11:58:17.574503 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38087238_7cf9_4f55_9c71_f18caa92ec78.slice/crio-b97a25e072746a444b6577055f97127a52f324b054b85040cc25a909fc87cfd7 WatchSource:0}: Error finding container b97a25e072746a444b6577055f97127a52f324b054b85040cc25a909fc87cfd7: Status 404 returned error can't find the container with id b97a25e072746a444b6577055f97127a52f324b054b85040cc25a909fc87cfd7
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.577477 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" event={"ID":"738a400c-28d7-488a-8706-6c74a5302686","Type":"ContainerStarted","Data":"a546921cb9ff303a20b9608cb6dc27de6f4acb536be33b9256c3c29add0db3e0"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.578689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.579792 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.079776319 +0000 UTC m=+147.323610088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.588003 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" event={"ID":"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb","Type":"ContainerStarted","Data":"6ed13594d1515997999a5b6952698a7aa44d4cd9b885a874aba4b23bd2123c9c"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.588045 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" event={"ID":"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb","Type":"ContainerStarted","Data":"566860156d3864d0a3bc693103e2792f04e0b95cce84713f5f3775aa1f4e9486"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.588366 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.598863 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7z6sc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.598920 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.600994 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" event={"ID":"3b2d93f2-8a27-4def-af47-b6a6f04039b4","Type":"ContainerStarted","Data":"950e17a843c45e5b91d054cd22905683fb6a5fd24b2f75bcfac2b00f298fa925"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.612323 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-svpdv" podStartSLOduration=125.612307368 podStartE2EDuration="2m5.612307368s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.611947898 +0000 UTC m=+146.855781667" watchObservedRunningTime="2025-11-24 11:58:17.612307368 +0000 UTC m=+146.856141137"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.630123 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" event={"ID":"d162bb2e-700e-48bf-9f6c-e44b7a009a07","Type":"ContainerStarted","Data":"6ac812412bd57e4406e3abf47e9f007139693eb34fd65d45ea419080e07d74c6"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.632461 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-49vlc"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.637530 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 11:58:17 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Nov 24 11:58:17 crc kubenswrapper[4782]: [+]process-running ok
Nov 24 11:58:17 crc kubenswrapper[4782]: healthz check failed
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.637572 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.654607 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qs4j5" event={"ID":"a162cdd4-6657-40da-92f9-5f428fe8dd96","Type":"ContainerStarted","Data":"f2f8308c67a164faa0943c519912117e1d5b08bb15c8409a5617a8f938de46b3"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.675845 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-svpdv" event={"ID":"c0d9b214-4fda-42e0-ad8a-fed4e0637175","Type":"ContainerStarted","Data":"4f1b096be236e72de18c1180c8c75af765bd53162c3eceb06e0f332c49e0893b"}
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.679314 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgcj" podStartSLOduration=124.679290374 podStartE2EDuration="2m4.679290374s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.672783394 +0000 UTC m=+146.916617163" watchObservedRunningTime="2025-11-24 11:58:17.679290374 +0000 UTC m=+146.923124133"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.682128 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.682204 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.682677 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.683424 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.183410644 +0000 UTC m=+147.427244413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.688002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v672g"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.706464 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52qsf"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.775068 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" podStartSLOduration=125.77505206 podStartE2EDuration="2m5.77505206s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.728964334 +0000 UTC m=+146.972798103" watchObservedRunningTime="2025-11-24 11:58:17.77505206 +0000 UTC m=+147.018885829"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.776358 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl"]
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.777940 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" podStartSLOduration=124.777933654 podStartE2EDuration="2m4.777933654s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.773824034 +0000 UTC m=+147.017657803" watchObservedRunningTime="2025-11-24 11:58:17.777933654 +0000 UTC m=+147.021767423"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.786758 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.793027 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.293017424 +0000 UTC m=+147.536851193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.887796 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.888346 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.388313316 +0000 UTC m=+147.632147085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.939227 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-49vlc" podStartSLOduration=125.939209372 podStartE2EDuration="2m5.939209372s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.909449043 +0000 UTC m=+147.153282812" watchObservedRunningTime="2025-11-24 11:58:17.939209372 +0000 UTC m=+147.183043141"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.954566 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7hpsn" podStartSLOduration=125.9545486 podStartE2EDuration="2m5.9545486s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.943172948 +0000 UTC m=+147.187006717" watchObservedRunningTime="2025-11-24 11:58:17.9545486 +0000 UTC m=+147.198382359"
Nov 24 11:58:17 crc kubenswrapper[4782]: I1124 11:58:17.989475 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:17 crc kubenswrapper[4782]: E1124 11:58:17.989803 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.489786838 +0000 UTC m=+147.733620607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.015442 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w9r9s" podStartSLOduration=125.015425637 podStartE2EDuration="2m5.015425637s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:17.980840197 +0000 UTC m=+147.224673976" watchObservedRunningTime="2025-11-24 11:58:18.015425637 +0000 UTC m=+147.259259406"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.027481 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" podStartSLOduration=126.027463538 podStartE2EDuration="2m6.027463538s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.027444878 +0000 UTC m=+147.271278657" watchObservedRunningTime="2025-11-24 11:58:18.027463538 +0000 UTC m=+147.271297317"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.092111 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.092818 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.592804456 +0000 UTC m=+147.836638225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.193691 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.193986 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.693974669 +0000 UTC m=+147.937808438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.197267 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" podStartSLOduration=125.197254745 podStartE2EDuration="2m5.197254745s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.091521938 +0000 UTC m=+147.335355707" watchObservedRunningTime="2025-11-24 11:58:18.197254745 +0000 UTC m=+147.441088514"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.294522 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.294791 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.794775752 +0000 UTC m=+148.038609521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.395404 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.395919 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:18.895908914 +0000 UTC m=+148.139742683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.499997 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.500833 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.000816256 +0000 UTC m=+148.244650025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.602450 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.603101 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.103089752 +0000 UTC m=+148.346923521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.653585 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 11:58:18 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Nov 24 11:58:18 crc kubenswrapper[4782]: [+]process-running ok
Nov 24 11:58:18 crc kubenswrapper[4782]: healthz check failed
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.653636 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.693923 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qs4j5" event={"ID":"a162cdd4-6657-40da-92f9-5f428fe8dd96","Type":"ContainerStarted","Data":"cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.701627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" event={"ID":"90b94f22-aec2-4ebe-9f18-d7a75014baa8","Type":"ContainerStarted","Data":"5797071ced839b2d5d1cfc7a5ff4a752183049efb84b945c754259a156021859"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.704203 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.704573 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.204559754 +0000 UTC m=+148.448393523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.713323 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" event={"ID":"bf52c96e-c39c-4bd7-a733-fad836e6b65f","Type":"ContainerStarted","Data":"1c4c2051cd690d8d42e8a9cdd6efd08ca2512ff06ab8b247795496f408e81015"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.713400 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" event={"ID":"bf52c96e-c39c-4bd7-a733-fad836e6b65f","Type":"ContainerStarted","Data":"ea64f3a544a4f17d3ac71115aa1ee029350cc3c907067d5c26d26ca3be387d95"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.724081 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qs4j5" podStartSLOduration=126.724059324 podStartE2EDuration="2m6.724059324s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.723780965 +0000 UTC m=+147.967614734" watchObservedRunningTime="2025-11-24 11:58:18.724059324 +0000 UTC m=+147.967893113"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.724285 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" event={"ID":"3d0cecf6-1037-494f-a783-682ba2b70960","Type":"ContainerStarted","Data":"0b4a9964d7a47e2269363030924b64309e153e908825cea0b55b657e6c9718e6"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.724321 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" event={"ID":"3d0cecf6-1037-494f-a783-682ba2b70960","Type":"ContainerStarted","Data":"8c4b1327eac3858504fa868c3f9590b754fc6e2e77dc0abd1986534704b4cdcd"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.733696 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" event={"ID":"38087238-7cf9-4f55-9c71-f18caa92ec78","Type":"ContainerStarted","Data":"b97a25e072746a444b6577055f97127a52f324b054b85040cc25a909fc87cfd7"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.741524 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-csb8m" event={"ID":"dd017984-4283-4306-b54c-f46e394c4523","Type":"ContainerStarted","Data":"8e2cac160428f1831d99356eb4a25fd52617bac184d3a029b90c06b9c6ce0f23"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.756640 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" event={"ID":"738a400c-28d7-488a-8706-6c74a5302686","Type":"ContainerStarted","Data":"d993186c8c1d735fa29a755c99844dbdf0fd387b1863c90d3fb251d34bf6d0da"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.756684 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" event={"ID":"738a400c-28d7-488a-8706-6c74a5302686","Type":"ContainerStarted","Data":"99f8465c2f1d5b6699752e9258c298e21f59dd3221e7463213b6a3434cb0addb"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.766829 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" event={"ID":"90af26be-4cef-4b0d-b75c-d24f1be33f85","Type":"ContainerStarted","Data":"5bdd998421f92ae5cc0abbf9b28edf96b2d1b64bb20f21138228884f43a694f4"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.766873 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" event={"ID":"90af26be-4cef-4b0d-b75c-d24f1be33f85","Type":"ContainerStarted","Data":"cd965c6facf9629ecbc9153f3ddffcf3725e231b98807ca8854c0e36d1637c36"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.767973 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.769458 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tdphh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body=
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.769510 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" podUID="90af26be-4cef-4b0d-b75c-d24f1be33f85" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.777013 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh" event={"ID":"67eea2b5-babb-4b66-859a-6881a1f4f0e7","Type":"ContainerStarted","Data":"04e1f1f437c1e51115f1b7d9f4a51f179a0df0a498854a4b1603056130d64365"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.782421 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" event={"ID":"9cec2a58-8937-4ecd-979e-9d9657548b69","Type":"ContainerStarted","Data":"ea3bb228a0842ff738eaf93d2f27958b4a705d9ae62fbefcf4f1d39260a424b5"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.782471 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" event={"ID":"9cec2a58-8937-4ecd-979e-9d9657548b69","Type":"ContainerStarted","Data":"2a16b294893324e8ce723d61af96ed263fa16f40e95854075c247da7ba17477f"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.809523 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.811811 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.311799995 +0000 UTC m=+148.555633764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.812545 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" event={"ID":"cf6e8e63-ab44-43fa-b46e-c9a6643d331a","Type":"ContainerStarted","Data":"85ac38da579d85f99446884c5310cbbc4766f4aedfaff8be2127ed210ab626c2"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.812574 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" event={"ID":"cf6e8e63-ab44-43fa-b46e-c9a6643d331a","Type":"ContainerStarted","Data":"9744dc62a3901a5825b4c5d278cc51d363934dc420769455c453281ed3517edc"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.824226 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s7z6s" podStartSLOduration=125.824209757 podStartE2EDuration="2m5.824209757s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.766539824 +0000 UTC m=+148.010373593" watchObservedRunningTime="2025-11-24 11:58:18.824209757 +0000 UTC m=+148.068043526"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.825681 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8shpq" podStartSLOduration=125.8256735 podStartE2EDuration="2m5.8256735s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.817823761 +0000 UTC m=+148.061657540" watchObservedRunningTime="2025-11-24 11:58:18.8256735 +0000 UTC m=+148.069507269"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.826737 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" event={"ID":"09bb7a75-abe3-4131-a261-90d4d0cd045e","Type":"ContainerStarted","Data":"1ba05a75510d87ad705d83de9fbc92511b2eb72d4aaf9939b8bab283479b03b2"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.840698 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" event={"ID":"6b347fdb-e1af-48f2-9496-cbbfa885ad1e","Type":"ContainerStarted","Data":"980795cc06115eea5782874256db71554e5a93c5034e11b6abfc97d3617f388e"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.850393 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" event={"ID":"97cc1a6f-1a30-4ec0-b771-87510a291869","Type":"ContainerStarted","Data":"d52e8d0c9c6b502035de4411e553ce8cfc616fd29ad35c55fe6399e1c0b6fefb"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.879109 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" event={"ID":"cae42d18-f8e2-4c6b-9b7f-c01fe2d72947","Type":"ContainerStarted","Data":"8287db75e8f961ab691ddfc99b39e3c9862d22215b692e97e7c4f8470b865416"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.879801 4782 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7kskc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.879870 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" podUID="cae42d18-f8e2-4c6b-9b7f-c01fe2d72947" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.891546 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" event={"ID":"3b2d93f2-8a27-4def-af47-b6a6f04039b4","Type":"ContainerStarted","Data":"7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.892308 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.903601 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-662wl" podStartSLOduration=125.903586845 podStartE2EDuration="2m5.903586845s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.856973254 +0000 UTC m=+148.100807023" watchObservedRunningTime="2025-11-24 11:58:18.903586845 +0000 UTC m=+148.147420614"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.904313 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jzdvr" event={"ID":"a042cc3b-be2f-4620-a3ba-332ed2ea13d2","Type":"ContainerStarted","Data":"51d40ef2f865fd42e1e049a1334a7aa22052ab7fd1360785edb2f7bcc239cc09"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.904365 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jzdvr"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.904404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jzdvr" event={"ID":"a042cc3b-be2f-4620-a3ba-332ed2ea13d2","Type":"ContainerStarted","Data":"04e27c0fc3ee7dcd15dd1d9fadc00b81e9c97a971268afc428d2d5ed06eec67b"}
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.905163 4782 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rt2c7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body=
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.905206 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.905358 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7z6sc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.905405 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.905541 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.905580 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.910036 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:18 crc kubenswrapper[4782]: E1124 11:58:18.911342 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.41131587 +0000 UTC m=+148.655149639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:18 crc kubenswrapper[4782]: I1124 11:58:18.972655 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wfsqv" podStartSLOduration=125.97263142 podStartE2EDuration="2m5.97263142s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.909650452 +0000 UTC m=+148.153484221" watchObservedRunningTime="2025-11-24 11:58:18.97263142 +0000 UTC m=+148.216465189"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.011845 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.012397 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.51236088 +0000 UTC m=+148.756194649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.045663 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" podStartSLOduration=126.045647622 podStartE2EDuration="2m6.045647622s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:18.971755425 +0000 UTC m=+148.215589184" watchObservedRunningTime="2025-11-24 11:58:19.045647622 +0000 UTC m=+148.289481411"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.048353 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5k47f" podStartSLOduration=126.048344641 podStartE2EDuration="2m6.048344641s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:19.044833208 +0000 UTC m=+148.288666977" watchObservedRunningTime="2025-11-24 11:58:19.048344641 +0000 UTC m=+148.292178410"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.113329 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.113512 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.613484702 +0000 UTC m=+148.857318471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.113619 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.113899 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.613890754 +0000 UTC m=+148.857724523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.214822 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.215144 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.71513 +0000 UTC m=+148.958963769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.220398 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-52jwz" podStartSLOduration=127.220379983 podStartE2EDuration="2m7.220379983s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:19.141472909 +0000 UTC m=+148.385306678" watchObservedRunningTime="2025-11-24 11:58:19.220379983 +0000 UTC m=+148.464213742"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.278353 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v5p2" podStartSLOduration=126.278337125 podStartE2EDuration="2m6.278337125s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:19.232655841 +0000 UTC m=+148.476489620" watchObservedRunningTime="2025-11-24 11:58:19.278337125 +0000 UTC m=+148.522170894"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.279217 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ccqrw"]
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.280137 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.294266 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.296760 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7hhxh" podStartSLOduration=126.296745972 podStartE2EDuration="2m6.296745972s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:19.294430075 +0000 UTC m=+148.538263844" watchObservedRunningTime="2025-11-24 11:58:19.296745972 +0000 UTC m=+148.540579741"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.319190 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccqrw"]
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.319818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-utilities\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.319850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvqd\" (UniqueName: \"kubernetes.io/projected/0b794f0a-8fb7-4253-8d82-40630895f983-kube-api-access-lkvqd\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.319890 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.319941 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-catalog-content\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.320267 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.820254239 +0000 UTC m=+149.064088008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.417858 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" podStartSLOduration=127.417843818 podStartE2EDuration="2m7.417843818s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:19.415944522 +0000 UTC m=+148.659778301" watchObservedRunningTime="2025-11-24 11:58:19.417843818 +0000 UTC m=+148.661677587"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.420364 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.420607 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.920579927 +0000 UTC m=+149.164413696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.420736 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-catalog-content\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.420773 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.420927 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-utilities\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.420945 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvqd\" (UniqueName: \"kubernetes.io/projected/0b794f0a-8fb7-4253-8d82-40630895f983-kube-api-access-lkvqd\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.421018 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.421332 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:19.921325779 +0000 UTC m=+149.165159548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.421666 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-utilities\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.421746 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-catalog-content\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.422278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.474207 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wbvvs"]
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.475276 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbvvs"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.482246 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvqd\" (UniqueName: \"kubernetes.io/projected/0b794f0a-8fb7-4253-8d82-40630895f983-kube-api-access-lkvqd\") pod \"community-operators-ccqrw\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " pod="openshift-marketplace/community-operators-ccqrw"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.484691 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.505409 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jzdvr" podStartSLOduration=8.505392313 podStartE2EDuration="8.505392313s" podCreationTimestamp="2025-11-24 11:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:19.495388991 +0000 UTC m=+148.739222760" watchObservedRunningTime="2025-11-24 11:58:19.505392313 +0000 UTC m=+148.749226082"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.507599 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbvvs"]
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.522867 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.523100 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.523142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-utilities\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.523186 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9l2w\" (UniqueName: \"kubernetes.io/projected/2718cb8c-7abd-486c-85ea-964738689708-kube-api-access-h9l2w\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs"
Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.523212 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:58:19
crc kubenswrapper[4782]: I1124 11:58:19.523261 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-catalog-content\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.523303 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.523860 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.023843872 +0000 UTC m=+149.267677641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.537324 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.539634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.539971 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.572592 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.616162 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccqrw" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.625973 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-utilities\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.626012 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.626050 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9l2w\" (UniqueName: \"kubernetes.io/projected/2718cb8c-7abd-486c-85ea-964738689708-kube-api-access-h9l2w\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.626087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-catalog-content\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.626440 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-catalog-content\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.626634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-utilities\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.626844 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.126834209 +0000 UTC m=+149.370667978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.641124 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:19 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:19 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:19 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.641311 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.665093 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b66k6"] Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.665927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.727588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.727754 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2996\" (UniqueName: \"kubernetes.io/projected/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-kube-api-access-k2996\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.727797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-catalog-content\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.727831 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-utilities\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.727949 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.22793524 +0000 UTC m=+149.471769009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.745532 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b66k6"] Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.817294 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.828569 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.828635 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2996\" (UniqueName: \"kubernetes.io/projected/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-kube-api-access-k2996\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.828666 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-catalog-content\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.828700 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-utilities\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.829073 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-utilities\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.829283 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.329272849 +0000 UTC m=+149.573106618 (durationBeforeRetry 500ms). 
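The nestedpendingoperations lines throughout this section expose how the retries are tracked: each volume operation is keyed by {volumeName, podName, nodeName}, with podName left empty for device-scoped work such as MountDevice, and a failed operation records an earliest-retry time (the durationBeforeRetry 500ms here). A toy illustration of that bookkeeping, not kubelet's actual implementation:

```go
// Toy sketch of the idea behind the nestedpendingoperations lines above:
// operations are keyed by {volumeName, podName, nodeName}, and a failure
// records a not-before time that gates the next retry.
package main

import (
	"fmt"
	"time"
)

type operationKey struct {
	volumeName string
	podName    string // empty for device-scoped ops such as MountDevice
	nodeName   string
}

type pendingOperations struct {
	notBefore map[operationKey]time.Time
}

func (p *pendingOperations) markFailed(k operationKey, backoff time.Duration) {
	p.notBefore[k] = time.Now().Add(backoff)
}

func (p *pendingOperations) mayRetry(k operationKey) bool {
	return time.Now().After(p.notBefore[k])
}

func main() {
	p := &pendingOperations{notBefore: map[operationKey]time.Time{}}

	// Device mount for the registry PVC: pod-agnostic, so podName stays empty.
	mount := operationKey{volumeName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"}
	// Teardown for the departed pod carries its UID, so the two operations
	// back off independently, exactly as the interleaved log lines show.
	unmount := mount
	unmount.podName = "8f668bae-612b-4b75-9490-919e737c6a3b"

	p.markFailed(mount, 500*time.Millisecond)
	fmt.Println(p.mayRetry(mount), p.mayRetry(unmount)) // false true
}
```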
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.829801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-catalog-content\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.836685 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.846818 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fbx7s"] Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.851842 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.879126 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2996\" (UniqueName: \"kubernetes.io/projected/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-kube-api-access-k2996\") pod \"community-operators-b66k6\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.909033 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbx7s"] Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.928253 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" event={"ID":"bf52c96e-c39c-4bd7-a733-fad836e6b65f","Type":"ContainerStarted","Data":"ac092341c6ef1611ec9ad9a9ee156a839badf173532696f812be41fd81f72443"} Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.929067 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.929194 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-catalog-content\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.929234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-utilities\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " 
pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.929261 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcgv7\" (UniqueName: \"kubernetes.io/projected/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-kube-api-access-hcgv7\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.929363 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:19 crc kubenswrapper[4782]: E1124 11:58:19.929435 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.429422712 +0000 UTC m=+149.673256481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.930567 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" event={"ID":"09bb7a75-abe3-4131-a261-90d4d0cd045e","Type":"ContainerStarted","Data":"90c8284657825810166d8028dc661e4ac84fc552898efc5479132c59cc0ae65c"} Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.931709 4782 generic.go:334] "Generic (PLEG): container finished" podID="38087238-7cf9-4f55-9c71-f18caa92ec78" containerID="50150a2d3933d1c8112c8cbb0fa9cf4971ee87fc873ae68b9015e13b3a2a29d7" exitCode=0 Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.932444 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" event={"ID":"38087238-7cf9-4f55-9c71-f18caa92ec78","Type":"ContainerDied","Data":"50150a2d3933d1c8112c8cbb0fa9cf4971ee87fc873ae68b9015e13b3a2a29d7"} Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.936289 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.936326 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.936405 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7z6sc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 
11:58:19.936419 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 24 11:58:19 crc kubenswrapper[4782]: I1124 11:58:19.994928 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.027192 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9l2w\" (UniqueName: \"kubernetes.io/projected/2718cb8c-7abd-486c-85ea-964738689708-kube-api-access-h9l2w\") pod \"certified-operators-wbvvs\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.039889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7kskc" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.051311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-utilities\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.051499 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcgv7\" (UniqueName: \"kubernetes.io/projected/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-kube-api-access-hcgv7\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.051781 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.052132 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-catalog-content\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.053914 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-utilities\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.062488 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-catalog-content\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " 
pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.079991 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.579976488 +0000 UTC m=+149.823810257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.096278 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.155865 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.156162 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.656147371 +0000 UTC m=+149.899981140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.205233 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcgv7\" (UniqueName: \"kubernetes.io/projected/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-kube-api-access-hcgv7\") pod \"certified-operators-fbx7s\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.259991 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.260550 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.760538989 +0000 UTC m=+150.004372758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.326559 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" podStartSLOduration=127.326542476 podStartE2EDuration="2m7.326542476s" podCreationTimestamp="2025-11-24 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:20.268723548 +0000 UTC m=+149.512557317" watchObservedRunningTime="2025-11-24 11:58:20.326542476 +0000 UTC m=+149.570376245" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.363414 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.363570 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.863547626 +0000 UTC m=+150.107381395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.363658 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.363927 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.863915527 +0000 UTC m=+150.107749296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.464788 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.465172 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:20.965154602 +0000 UTC m=+150.208988361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.471702 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.525858 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccqrw"] Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.569025 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.569326 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.069314323 +0000 UTC m=+150.313148092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.647674 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:20 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:20 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:20 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.648007 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.670201 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.670622 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.17060649 +0000 UTC m=+150.414440249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.771796 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.772135 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.272123504 +0000 UTC m=+150.515957273 (durationBeforeRetry 500ms). 
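Interleaved with the volume retries, the router's startup probe keeps failing with HTTP 500 and an aggregated-healthz body: each subcheck reports as "[+]name ok" or "[-]name failed: reason withheld", and any failure turns the whole endpoint red. A minimal handler in the same style, with the check names wired up purely for illustration (the real router registers its own backend-http and has-synced checks):

```go
// Minimal aggregated-healthz handler producing the "[+]/[-]" body format seen
// in the router probe output above. Check wiring is invented for illustration.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	ok   bool
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if c.ok {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // kubelet logs "statuscode: 500"
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", false},
		{"has-synced", false},
		{"process-running", true},
	}))
	http.ListenAndServe(":8080", nil)
}
```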
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.874283 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.874555 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.374541494 +0000 UTC m=+150.618375263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.970385 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqrw" event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerStarted","Data":"5ddc922c45a5761ef74b317a7d05b99e8d6504e8a438a8354dc944c64bd20eea"} Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.970440 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqrw" event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerStarted","Data":"ae932cee91095f0ed5b29eaef3731c74650f1779fc6504b3b8315bfbabf8953e"} Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.970509 4782 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rt2c7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.970559 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.983115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: 
\"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:20 crc kubenswrapper[4782]: E1124 11:58:20.983460 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.483448803 +0000 UTC m=+150.727282572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.983754 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tdphh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:58:20 crc kubenswrapper[4782]: I1124 11:58:20.983782 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" podUID="90af26be-4cef-4b0d-b75c-d24f1be33f85" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.077928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" event={"ID":"09bb7a75-abe3-4131-a261-90d4d0cd045e","Type":"ContainerStarted","Data":"f689401dccd1b50765cf3ec75d0777a56bccffb3da6ddec6a170087c47c4a258"} Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.084681 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.085691 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.585674007 +0000 UTC m=+150.829507776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.114006 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" event={"ID":"38087238-7cf9-4f55-9c71-f18caa92ec78","Type":"ContainerStarted","Data":"9ac4d7e24cd59e1602025cd78fea66a51fa94914d97d7aec2a30591de56db130"} Nov 24 11:58:21 crc kubenswrapper[4782]: W1124 11:58:21.180978 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c27fe14110d350e854b58c8ab06fc9b39c967bb8c38827b70b0fa4d6c8b29e1e WatchSource:0}: Error finding container c27fe14110d350e854b58c8ab06fc9b39c967bb8c38827b70b0fa4d6c8b29e1e: Status 404 returned error can't find the container with id c27fe14110d350e854b58c8ab06fc9b39c967bb8c38827b70b0fa4d6c8b29e1e Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.194753 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.196570 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.696558955 +0000 UTC m=+150.940392724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.297580 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.297912 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.797896253 +0000 UTC m=+151.041730022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.406332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.406706 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:21.906690729 +0000 UTC m=+151.150524498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.525243 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.525904 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.025884329 +0000 UTC m=+151.269718098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.627982 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.628326 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.128314929 +0000 UTC m=+151.372148698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.657316 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74j5j"]
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.658430 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.661883 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.671412 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 11:58:21 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Nov 24 11:58:21 crc kubenswrapper[4782]: [+]process-running ok
Nov 24 11:58:21 crc kubenswrapper[4782]: healthz check failed
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.671475 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.697035 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74j5j"]
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.729828 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.730069 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.230043039 +0000 UTC m=+151.473876798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.730144 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.730451 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.230440181 +0000 UTC m=+151.474273950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.730206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-catalog-content\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.730613 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmkp9\" (UniqueName: \"kubernetes.io/projected/487eea64-5acd-4cb6-a57d-3904c3c86647-kube-api-access-nmkp9\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.730651 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-utilities\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.835780 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.835916 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.335890809 +0000 UTC m=+151.579724578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.836044 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-utilities\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.836087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.836140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-catalog-content\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.836163 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmkp9\" (UniqueName: \"kubernetes.io/projected/487eea64-5acd-4cb6-a57d-3904c3c86647-kube-api-access-nmkp9\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.837158 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-utilities\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.837387 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.337363362 +0000 UTC m=+151.581197131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.837696 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-catalog-content\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.877540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmkp9\" (UniqueName: \"kubernetes.io/projected/487eea64-5acd-4cb6-a57d-3904c3c86647-kube-api-access-nmkp9\") pod \"redhat-marketplace-74j5j\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:21 crc kubenswrapper[4782]: I1124 11:58:21.936972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:21 crc kubenswrapper[4782]: E1124 11:58:21.937280 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.437263637 +0000 UTC m=+151.681097396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.014299 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbvvs"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.017497 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74j5j"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.038773 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.039090 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.53907899 +0000 UTC m=+151.782912759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.049625 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nj4mn"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.050694 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.058924 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b66k6"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.076494 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj4mn"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.110040 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.110780 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.117332 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.117563 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.125815 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tdphh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.125860 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" podUID="90af26be-4cef-4b0d-b75c-d24f1be33f85" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.126621 4782 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rt2c7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.126686 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.140634 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.140958 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-utilities\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.141085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rrdm\" (UniqueName: \"kubernetes.io/projected/35ae095c-baa4-433e-b316-fc8592696a0b-kube-api-access-4rrdm\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.141199 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-catalog-content\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.141383 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.641349435 +0000 UTC m=+151.885183204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.142874 4782 generic.go:334] "Generic (PLEG): container finished" podID="d162bb2e-700e-48bf-9f6c-e44b7a009a07" containerID="6ac812412bd57e4406e3abf47e9f007139693eb34fd65d45ea419080e07d74c6" exitCode=0
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.142944 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" event={"ID":"d162bb2e-700e-48bf-9f6c-e44b7a009a07","Type":"ContainerDied","Data":"6ac812412bd57e4406e3abf47e9f007139693eb34fd65d45ea419080e07d74c6"}
Nov 24 11:58:22 crc kubenswrapper[4782]: W1124 11:58:22.143617 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c3b0ec5_0727_4ae0_bf22_c9b8d9752abf.slice/crio-44653b1f1913346026ffe3cb62b806123173bc122ebc2175133a6a4123e107b1 WatchSource:0}: Error finding container 44653b1f1913346026ffe3cb62b806123173bc122ebc2175133a6a4123e107b1: Status 404 returned error can't find the container with id 44653b1f1913346026ffe3cb62b806123173bc122ebc2175133a6a4123e107b1
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.157046 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6bef7d643765e6c0c1752ff3a2a6d11179b028ebc396c9ae90096a1bab4517e7"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.166473 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.179018 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbx7s"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.204923 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8605dc37b367f1c42f0f5afa993b0630ecaa9564739097a6a7dc02bf006ee01f"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.204966 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c27fe14110d350e854b58c8ab06fc9b39c967bb8c38827b70b0fa4d6c8b29e1e"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.235268 4782 generic.go:334] "Generic (PLEG): container finished" podID="0b794f0a-8fb7-4253-8d82-40630895f983" containerID="5ddc922c45a5761ef74b317a7d05b99e8d6504e8a438a8354dc944c64bd20eea" exitCode=0
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.235464 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqrw" event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerDied","Data":"5ddc922c45a5761ef74b317a7d05b99e8d6504e8a438a8354dc944c64bd20eea"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.242843 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rrdm\" (UniqueName: \"kubernetes.io/projected/35ae095c-baa4-433e-b316-fc8592696a0b-kube-api-access-4rrdm\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.242963 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764173f6-8950-49b1-adbe-7a0d794e71d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.243065 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-catalog-content\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.243152 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.243233 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764173f6-8950-49b1-adbe-7a0d794e71d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.243305 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-utilities\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.244127 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-utilities\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.244602 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-catalog-content\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.244885 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.744875798 +0000 UTC m=+151.988709567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.269362 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.280677 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" event={"ID":"09bb7a75-abe3-4131-a261-90d4d0cd045e","Type":"ContainerStarted","Data":"5230df5e37fd334f357502739e98e3678a3f0001820780af7df779254faf9a81"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.351358 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.352580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764173f6-8950-49b1-adbe-7a0d794e71d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.352758 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764173f6-8950-49b1-adbe-7a0d794e71d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.370735 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rrdm\" (UniqueName: \"kubernetes.io/projected/35ae095c-baa4-433e-b316-fc8592696a0b-kube-api-access-4rrdm\") pod \"redhat-marketplace-nj4mn\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.372052 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764173f6-8950-49b1-adbe-7a0d794e71d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.384851 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.884825403 +0000 UTC m=+152.128659172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.405540 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj4mn"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.421112 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" event={"ID":"38087238-7cf9-4f55-9c71-f18caa92ec78","Type":"ContainerStarted","Data":"51f26bb5a9710580567045adfa46e445205a2a0b4ce5e6b4e940f27ace4b37d5"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.448801 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"40eabeff9f6f5480a1f19c823e0e683bfe20334ed4839c9cd8b53ba71eded069"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.448842 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1bd860879a21ccf3a2303f3056adb1defd0af9b1a0025177ac5e2b1a30d91514"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.453938 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.454312 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:22.954297621 +0000 UTC m=+152.198131390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.472224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764173f6-8950-49b1-adbe-7a0d794e71d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.477146 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvvs" event={"ID":"2718cb8c-7abd-486c-85ea-964738689708","Type":"ContainerStarted","Data":"5f4ab4fdc557c2996d8b56fa28ee9f30ee96de0de1f208916964589dbb7aad0f"}
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.504251 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8s4nr"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.525964 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s4nr"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.526061 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.537338 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.538791 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" podStartSLOduration=130.538778808 podStartE2EDuration="2m10.538778808s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:22.537680346 +0000 UTC m=+151.781514115" watchObservedRunningTime="2025-11-24 11:58:22.538778808 +0000 UTC m=+151.782612577"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.555119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.558435 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.058418831 +0000 UTC m=+152.302252600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.651987 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8m8hc"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.652595 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 11:58:22 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Nov 24 11:58:22 crc kubenswrapper[4782]: [+]process-running ok
Nov 24 11:58:22 crc kubenswrapper[4782]: healthz check failed
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.652645 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.661269 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.677003 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-catalog-content\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.677078 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-utilities\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.677094 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgxtb\" (UniqueName: \"kubernetes.io/projected/5b78e362-39e7-43a6-8a13-046c45623920-kube-api-access-sgxtb\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.677134 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.677401 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.177390144 +0000 UTC m=+152.421223913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.705935 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8m8hc"]
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.753590 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.778943 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.779235 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-utilities\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.779339 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-utilities\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.779427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgxtb\" (UniqueName: \"kubernetes.io/projected/5b78e362-39e7-43a6-8a13-046c45623920-kube-api-access-sgxtb\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.779500 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kb9b\" (UniqueName: \"kubernetes.io/projected/1c668512-a10f-4fdb-9bd3-7730552844f5-kube-api-access-9kb9b\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.779617 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-catalog-content\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.779698 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-catalog-content\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.780516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-catalog-content\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.780663 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.280648279 +0000 UTC m=+152.524482048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.780942 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-utilities\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.804553 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgxtb\" (UniqueName: \"kubernetes.io/projected/5b78e362-39e7-43a6-8a13-046c45623920-kube-api-access-sgxtb\") pod \"redhat-operators-8s4nr\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.881295 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-catalog-content\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.881661 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-utilities\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.881684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kb9b\" (UniqueName: \"kubernetes.io/projected/1c668512-a10f-4fdb-9bd3-7730552844f5-kube-api-access-9kb9b\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.881725 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.882013 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.382000318 +0000 UTC m=+152.625834087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.882278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-catalog-content\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.882347 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-utilities\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.906508 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s4nr"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.917062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kb9b\" (UniqueName: \"kubernetes.io/projected/1c668512-a10f-4fdb-9bd3-7730552844f5-kube-api-access-9kb9b\") pod \"redhat-operators-8m8hc\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:22 crc kubenswrapper[4782]: I1124 11:58:22.983473 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:22 crc kubenswrapper[4782]: E1124 11:58:22.983760 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.483742408 +0000 UTC m=+152.727576177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.007992 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8m8hc"
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.046953 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74j5j"]
Nov 24 11:58:23 crc kubenswrapper[4782]: W1124 11:58:23.072459 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487eea64_5acd_4cb6_a57d_3904c3c86647.slice/crio-cfd2ed5ba65bf1928284a6b1a6270ef835b41abac99e356fa491e0f7391e7c14 WatchSource:0}: Error finding container cfd2ed5ba65bf1928284a6b1a6270ef835b41abac99e356fa491e0f7391e7c14: Status 404 returned error can't find the container with id cfd2ed5ba65bf1928284a6b1a6270ef835b41abac99e356fa491e0f7391e7c14
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.087554 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.087842 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.587829587 +0000 UTC m=+152.831663366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.118356 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj4mn"]
Nov 24 11:58:23 crc kubenswrapper[4782]: W1124 11:58:23.178090 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ae095c_baa4_433e_b316_fc8592696a0b.slice/crio-907fbc142277dacb70b1459310076c48b177cd37934c79882a5c9646a926d30c WatchSource:0}: Error finding container 907fbc142277dacb70b1459310076c48b177cd37934c79882a5c9646a926d30c: Status 404 returned error can't find the container with id 907fbc142277dacb70b1459310076c48b177cd37934c79882a5c9646a926d30c
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.188270 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.188788 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.688773453 +0000 UTC m=+152.932607222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.284136 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.292058 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.292367 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.792356047 +0000 UTC m=+153.036189816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.393330 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.393629 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.893615124 +0000 UTC m=+153.137448893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.463844 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8m8hc"]
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.496777 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.497132 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:23.997117315 +0000 UTC m=+153.240951084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.530402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj4mn" event={"ID":"35ae095c-baa4-433e-b316-fc8592696a0b","Type":"ContainerStarted","Data":"d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.530435 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj4mn" event={"ID":"35ae095c-baa4-433e-b316-fc8592696a0b","Type":"ContainerStarted","Data":"907fbc142277dacb70b1459310076c48b177cd37934c79882a5c9646a926d30c"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.543276 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"764173f6-8950-49b1-adbe-7a0d794e71d8","Type":"ContainerStarted","Data":"06ac09da14a20ff94e44a296b26dab9fdccd37cea9851e8ef2ca1ff466018d90"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.552129 4782 generic.go:334] "Generic (PLEG): container finished" podID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerID="0a86d41d306738ce032464c78fcaa2be6319e8fed3b53518b1e5a89ee3037cd9" exitCode=0
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.552202 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b66k6" event={"ID":"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf","Type":"ContainerDied","Data":"0a86d41d306738ce032464c78fcaa2be6319e8fed3b53518b1e5a89ee3037cd9"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.552225 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b66k6" event={"ID":"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf","Type":"ContainerStarted","Data":"44653b1f1913346026ffe3cb62b806123173bc122ebc2175133a6a4123e107b1"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.562820 4782 generic.go:334] "Generic (PLEG): container finished" podID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerID="4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4" exitCode=0
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.562881 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbx7s" event={"ID":"127b3d48-6f6d-4009-8ecb-d31eff88cfc7","Type":"ContainerDied","Data":"4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.562903 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbx7s" event={"ID":"127b3d48-6f6d-4009-8ecb-d31eff88cfc7","Type":"ContainerStarted","Data":"48bf9452c32f0952a5b28b898d85bd03658cc48beda1432fa181e67d8ce92db2"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.589917 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" event={"ID":"09bb7a75-abe3-4131-a261-90d4d0cd045e","Type":"ContainerStarted","Data":"5f104e3a70ec017a6b1ebecc6ff7e20e15e7981a5c53e34f0992985c6c3d81f5"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.597429 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.597620 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.097591588 +0000 UTC m=+153.341425357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.597734 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9"
Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.598396 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.098385001 +0000 UTC m=+153.342218770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.600748 4782 generic.go:334] "Generic (PLEG): container finished" podID="2718cb8c-7abd-486c-85ea-964738689708" containerID="5aee2710beed1f7153ca4fa40d8c0864781917d0e5a6491f042debd968be0df4" exitCode=0
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.600829 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvvs" event={"ID":"2718cb8c-7abd-486c-85ea-964738689708","Type":"ContainerDied","Data":"5aee2710beed1f7153ca4fa40d8c0864781917d0e5a6491f042debd968be0df4"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.607520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"aa62f399de447c77d88f5b5d186c16510d0fb28f3a7ed3bed68d68e236a16ddb"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.607711 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.611148 4782 generic.go:334] "Generic (PLEG): container finished" podID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerID="1e8c98d00d42496d3c77674900001dd332b179930c68e5b5b5f518361ca1a062" exitCode=0
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.611475 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74j5j" event={"ID":"487eea64-5acd-4cb6-a57d-3904c3c86647","Type":"ContainerDied","Data":"1e8c98d00d42496d3c77674900001dd332b179930c68e5b5b5f518361ca1a062"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.611537 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74j5j" event={"ID":"487eea64-5acd-4cb6-a57d-3904c3c86647","Type":"ContainerStarted","Data":"cfd2ed5ba65bf1928284a6b1a6270ef835b41abac99e356fa491e0f7391e7c14"}
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.639734 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s4nr"]
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.641258 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 11:58:23 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Nov 24 11:58:23 crc kubenswrapper[4782]: [+]process-running ok
Nov 24 11:58:23 crc kubenswrapper[4782]: healthz check failed
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.641490 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.706955 4782 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.708281 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.208262059 +0000 UTC m=+153.452095828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.736175 4782 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.808817 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.809415 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.309403172 +0000 UTC m=+153.553236941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:23 crc kubenswrapper[4782]: I1124 11:58:23.909851 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:23 crc kubenswrapper[4782]: E1124 11:58:23.910108 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.410094261 +0000 UTC m=+153.653928030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.011092 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.011502 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.511485471 +0000 UTC m=+153.755319240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.019785 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.033778 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7q7d9" podStartSLOduration=13.033761852 podStartE2EDuration="13.033761852s" podCreationTimestamp="2025-11-24 11:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:23.775816101 +0000 UTC m=+153.019649870" watchObservedRunningTime="2025-11-24 11:58:24.033761852 +0000 UTC m=+153.277595621" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.112803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d162bb2e-700e-48bf-9f6c-e44b7a009a07-config-volume\") pod \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.112920 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.112996 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmk5f\" (UniqueName: \"kubernetes.io/projected/d162bb2e-700e-48bf-9f6c-e44b7a009a07-kube-api-access-vmk5f\") pod \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.113040 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d162bb2e-700e-48bf-9f6c-e44b7a009a07-secret-volume\") pod \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\" (UID: \"d162bb2e-700e-48bf-9f6c-e44b7a009a07\") " Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.114113 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d162bb2e-700e-48bf-9f6c-e44b7a009a07-config-volume" (OuterVolumeSpecName: "config-volume") pod "d162bb2e-700e-48bf-9f6c-e44b7a009a07" (UID: "d162bb2e-700e-48bf-9f6c-e44b7a009a07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.114138 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.614110837 +0000 UTC m=+153.857944606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.121814 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d162bb2e-700e-48bf-9f6c-e44b7a009a07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d162bb2e-700e-48bf-9f6c-e44b7a009a07" (UID: "d162bb2e-700e-48bf-9f6c-e44b7a009a07"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.126760 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d162bb2e-700e-48bf-9f6c-e44b7a009a07-kube-api-access-vmk5f" (OuterVolumeSpecName: "kube-api-access-vmk5f") pod "d162bb2e-700e-48bf-9f6c-e44b7a009a07" (UID: "d162bb2e-700e-48bf-9f6c-e44b7a009a07"). InnerVolumeSpecName "kube-api-access-vmk5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.214340 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.214447 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmk5f\" (UniqueName: \"kubernetes.io/projected/d162bb2e-700e-48bf-9f6c-e44b7a009a07-kube-api-access-vmk5f\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.214464 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d162bb2e-700e-48bf-9f6c-e44b7a009a07-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.214474 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d162bb2e-700e-48bf-9f6c-e44b7a009a07-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.214704 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.714689184 +0000 UTC m=+153.958522953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.315440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.315649 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.81562169 +0000 UTC m=+154.059455459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.315883 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.316242 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.816226808 +0000 UTC m=+154.060060587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.417504 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.417900 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.917860145 +0000 UTC m=+154.161693914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.418066 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.418472 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:24.918457542 +0000 UTC m=+154.162291321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.464793 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.465068 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.464804 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.465262 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.492097 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.519044 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.519149 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:25.019131671 +0000 UTC m=+154.262965440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.519333 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.519684 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:25.019657347 +0000 UTC m=+154.263491116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.620067 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.620260 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:25.120215952 +0000 UTC m=+154.364049721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.620418 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.620688 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:25.120675156 +0000 UTC m=+154.364508925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.624197 4782 generic.go:334] "Generic (PLEG): container finished" podID="764173f6-8950-49b1-adbe-7a0d794e71d8" containerID="7f997458858cae189ce68cbf46676ca984cf379184b2640310df2957101c27ee" exitCode=0 Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.624259 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"764173f6-8950-49b1-adbe-7a0d794e71d8","Type":"ContainerDied","Data":"7f997458858cae189ce68cbf46676ca984cf379184b2640310df2957101c27ee"} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.632646 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.635703 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerID="329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe" exitCode=0 Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.635780 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8m8hc" event={"ID":"1c668512-a10f-4fdb-9bd3-7730552844f5","Type":"ContainerDied","Data":"329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe"} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.635810 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8m8hc" event={"ID":"1c668512-a10f-4fdb-9bd3-7730552844f5","Type":"ContainerStarted","Data":"7b2caefccae10bff17ae3fced0b3bc4098e13b17d91983955d4385dade9e092d"} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.638345 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:24 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:24 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:24 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.638419 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.639847 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b78e362-39e7-43a6-8a13-046c45623920" containerID="e73184fcb9ce3831f23019d758a0040509bd9b5f7260d3aa817d360b3e4f7c4f" exitCode=0 Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.639906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s4nr" event={"ID":"5b78e362-39e7-43a6-8a13-046c45623920","Type":"ContainerDied","Data":"e73184fcb9ce3831f23019d758a0040509bd9b5f7260d3aa817d360b3e4f7c4f"} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.639929 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s4nr" event={"ID":"5b78e362-39e7-43a6-8a13-046c45623920","Type":"ContainerStarted","Data":"60be9d023a6deb713d5cae93d6f44ebe52a9aea817d1545b3c5043010fc0fa35"} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.646729 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" event={"ID":"d162bb2e-700e-48bf-9f6c-e44b7a009a07","Type":"ContainerDied","Data":"176520a94a6a5bc6db05a1236ce7899cfdc1b5dd78e0adada2e308d2adf24900"} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.646761 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="176520a94a6a5bc6db05a1236ce7899cfdc1b5dd78e0adada2e308d2adf24900" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.646805 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.657210 4782 generic.go:334] "Generic (PLEG): container finished" podID="35ae095c-baa4-433e-b316-fc8592696a0b" containerID="d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4" exitCode=0 Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.657419 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj4mn" event={"ID":"35ae095c-baa4-433e-b316-fc8592696a0b","Type":"ContainerDied","Data":"d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4"} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.665481 4782 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T11:58:23.736199615Z","Handler":null,"Name":""} Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.723439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.723782 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:58:25.223767545 +0000 UTC m=+154.467601314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.723812 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: E1124 11:58:24.724707 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:58:25.224698083 +0000 UTC m=+154.468531852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4cfz9" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.731262 4782 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.731300 4782 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.755968 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tdphh" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.825045 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.829432 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.927740 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.929994 4782 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
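
Note: the sequence above shows the whole failure mode and its resolution. Every MountVolume.MountDevice and UnmountVolume.TearDown attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is rejected with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and re-queued with a 500ms durationBeforeRetry, until the plugin watcher picks up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock (11:58:23.736), the kubelet validates and registers the driver (11:58:24.731), and the pending TearDown finally succeeds (11:58:24.829). The handshake the kubelet is waiting on is the plugin-registration gRPC service served on that -reg.sock socket. The following Go program is a minimal sketch of that service, in the style of an external registrar such as node-driver-registrar: the driver name, socket paths, and the "1.0.0" version are taken from the log above, while the program structure and names are illustrative assumptions, not the actual hostpath-provisioner implementation.

package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrationServer answers the kubelet's plugin-registration calls.
// The kubelet watches /var/lib/kubelet/plugins_registry/, calls GetInfo on
// each new socket (plugin_watcher.go above), then dials the reported CSI
// endpoint to validate the driver (csi_plugin.go:100 above) and reports the
// outcome through NotifyRegistrationStatus.
type registrationServer struct {
	driverName string   // "kubevirt.io.hostpath-provisioner" in this log
	endpoint   string   // CSI socket: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	versions   []string // the log shows "versions: 1.0.0"
}

// GetInfo tells the kubelet what kind of plugin lives behind this socket.
func (s *registrationServer) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              s.driverName,
		Endpoint:          s.endpoint,
		SupportedVersions: s.versions,
	}, nil
}

// NotifyRegistrationStatus is the kubelet's callback after validation; until
// this succeeds, every CSI mount/unmount for the driver fails exactly as in
// the entries above.
func (s *registrationServer) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !status.PluginRegistered {
		log.Fatalf("kubelet rejected plugin registration: %s", status.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	sock := "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
	os.Remove(sock) // a stale socket from a previous run would break the listener
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, &registrationServer{
		driverName: "kubevirt.io.hostpath-provisioner",
		endpoint:   "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		versions:   []string{"1.0.0"},
	})
	log.Fatal(srv.Serve(lis))
}

Once NotifyRegistrationStatus reports success, the reconciler's next 500ms retry finds the driver in the registered list, which is why the TearDown at 11:58:24.829 goes through and the MountVolume below completes; the "Skipping MountDevice" message at 11:58:24.929 just reflects that this driver does not advertise the STAGE_UNSTAGE_VOLUME capability, so the staging step is a no-op.
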
Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.930034 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:24 crc kubenswrapper[4782]: I1124 11:58:24.986166 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4cfz9\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.079751 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.079799 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.086340 4782 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qkg9c container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]log ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]etcd ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/max-in-flight-filter ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 24 11:58:25 crc kubenswrapper[4782]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 24 11:58:25 crc kubenswrapper[4782]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/project.openshift.io-projectcache ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-startinformers ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 24 11:58:25 crc kubenswrapper[4782]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 11:58:25 crc kubenswrapper[4782]: livez check failed Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.086417 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" podUID="38087238-7cf9-4f55-9c71-f18caa92ec78" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.099240 4782 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.153099 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.153134 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.158390 4782 patch_prober.go:28] interesting pod/console-f9d7485db-qs4j5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.158477 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qs4j5" podUID="a162cdd4-6657-40da-92f9-5f428fe8dd96" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.246856 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.533208 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.636350 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:25 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:25 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:25 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.636786 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:25 crc kubenswrapper[4782]: I1124 11:58:25.892506 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4cfz9"] Nov 24 11:58:25 crc kubenswrapper[4782]: W1124 11:58:25.917918 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b47a4d_ace6_4560_89a5_b3e3ce247c74.slice/crio-b0c9015e2ac2449a65b07b13d6d00e526a30dbd3cb51be36ee30b0270f3c5aa0 WatchSource:0}: Error finding container b0c9015e2ac2449a65b07b13d6d00e526a30dbd3cb51be36ee30b0270f3c5aa0: Status 404 returned error can't find the container with id b0c9015e2ac2449a65b07b13d6d00e526a30dbd3cb51be36ee30b0270f3c5aa0 Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.154704 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.255915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764173f6-8950-49b1-adbe-7a0d794e71d8-kubelet-dir\") pod \"764173f6-8950-49b1-adbe-7a0d794e71d8\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.256044 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/764173f6-8950-49b1-adbe-7a0d794e71d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "764173f6-8950-49b1-adbe-7a0d794e71d8" (UID: "764173f6-8950-49b1-adbe-7a0d794e71d8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.256308 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764173f6-8950-49b1-adbe-7a0d794e71d8-kube-api-access\") pod \"764173f6-8950-49b1-adbe-7a0d794e71d8\" (UID: \"764173f6-8950-49b1-adbe-7a0d794e71d8\") " Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.256592 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764173f6-8950-49b1-adbe-7a0d794e71d8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.264608 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764173f6-8950-49b1-adbe-7a0d794e71d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "764173f6-8950-49b1-adbe-7a0d794e71d8" (UID: "764173f6-8950-49b1-adbe-7a0d794e71d8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.357769 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764173f6-8950-49b1-adbe-7a0d794e71d8-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.633837 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:26 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:26 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:26 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.633910 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.734095 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"764173f6-8950-49b1-adbe-7a0d794e71d8","Type":"ContainerDied","Data":"06ac09da14a20ff94e44a296b26dab9fdccd37cea9851e8ef2ca1ff466018d90"} Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.734123 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.734135 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ac09da14a20ff94e44a296b26dab9fdccd37cea9851e8ef2ca1ff466018d90" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.746560 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" event={"ID":"d1b47a4d-ace6-4560-89a5-b3e3ce247c74","Type":"ContainerStarted","Data":"40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4"} Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.746607 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" event={"ID":"d1b47a4d-ace6-4560-89a5-b3e3ce247c74","Type":"ContainerStarted","Data":"b0c9015e2ac2449a65b07b13d6d00e526a30dbd3cb51be36ee30b0270f3c5aa0"} Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.746717 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.766566 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" podStartSLOduration=134.766546941 podStartE2EDuration="2m14.766546941s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:26.762614646 +0000 UTC m=+156.006448435" watchObservedRunningTime="2025-11-24 11:58:26.766546941 +0000 UTC m=+156.010380710" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.778637 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 11:58:26 crc kubenswrapper[4782]: E1124 11:58:26.778841 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d162bb2e-700e-48bf-9f6c-e44b7a009a07" containerName="collect-profiles" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.778853 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d162bb2e-700e-48bf-9f6c-e44b7a009a07" containerName="collect-profiles" Nov 24 11:58:26 crc kubenswrapper[4782]: E1124 11:58:26.778870 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764173f6-8950-49b1-adbe-7a0d794e71d8" containerName="pruner" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.778876 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="764173f6-8950-49b1-adbe-7a0d794e71d8" containerName="pruner" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.778972 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="764173f6-8950-49b1-adbe-7a0d794e71d8" containerName="pruner" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.778984 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d162bb2e-700e-48bf-9f6c-e44b7a009a07" containerName="collect-profiles" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.779311 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.782091 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.783037 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.793105 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.863741 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.863960 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.965438 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.965549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:26 crc kubenswrapper[4782]: I1124 11:58:26.965738 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:27 crc kubenswrapper[4782]: I1124 11:58:27.004132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:27 crc kubenswrapper[4782]: I1124 11:58:27.121024 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:27 crc kubenswrapper[4782]: I1124 11:58:27.605707 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 11:58:27 crc kubenswrapper[4782]: I1124 11:58:27.641972 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:27 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:27 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:27 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:27 crc kubenswrapper[4782]: I1124 11:58:27.642022 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:27 crc kubenswrapper[4782]: W1124 11:58:27.681718 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3142e2bf_fbe4_464a_9eb2_991e66567b5f.slice/crio-b0b8ebcf9551d23b3654ee5b67d012a341132a49b1ccded8a23692a63115c3cf WatchSource:0}: Error finding container b0b8ebcf9551d23b3654ee5b67d012a341132a49b1ccded8a23692a63115c3cf: Status 404 returned error can't find the container with id b0b8ebcf9551d23b3654ee5b67d012a341132a49b1ccded8a23692a63115c3cf Nov 24 11:58:27 crc kubenswrapper[4782]: I1124 11:58:27.775582 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3142e2bf-fbe4-464a-9eb2-991e66567b5f","Type":"ContainerStarted","Data":"b0b8ebcf9551d23b3654ee5b67d012a341132a49b1ccded8a23692a63115c3cf"} Nov 24 11:58:28 crc kubenswrapper[4782]: I1124 11:58:28.634801 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:28 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:28 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:28 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:28 crc kubenswrapper[4782]: I1124 11:58:28.635173 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:28 crc kubenswrapper[4782]: I1124 11:58:28.822747 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3142e2bf-fbe4-464a-9eb2-991e66567b5f","Type":"ContainerStarted","Data":"89a00d72410eaf0a99d43cc6b8a869e2e293f043917c3fe65341c556225e5f2a"} Nov 24 11:58:28 crc kubenswrapper[4782]: I1124 11:58:28.839588 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.839569039 podStartE2EDuration="2.839569039s" podCreationTimestamp="2025-11-24 11:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:28.838761006 +0000 UTC m=+158.082594775" 
watchObservedRunningTime="2025-11-24 11:58:28.839569039 +0000 UTC m=+158.083402808" Nov 24 11:58:29 crc kubenswrapper[4782]: I1124 11:58:29.535450 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jzdvr" Nov 24 11:58:29 crc kubenswrapper[4782]: I1124 11:58:29.639786 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:29 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:29 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:29 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:29 crc kubenswrapper[4782]: I1124 11:58:29.639866 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:29 crc kubenswrapper[4782]: I1124 11:58:29.848080 4782 generic.go:334] "Generic (PLEG): container finished" podID="3142e2bf-fbe4-464a-9eb2-991e66567b5f" containerID="89a00d72410eaf0a99d43cc6b8a869e2e293f043917c3fe65341c556225e5f2a" exitCode=0 Nov 24 11:58:29 crc kubenswrapper[4782]: I1124 11:58:29.848170 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3142e2bf-fbe4-464a-9eb2-991e66567b5f","Type":"ContainerDied","Data":"89a00d72410eaf0a99d43cc6b8a869e2e293f043917c3fe65341c556225e5f2a"} Nov 24 11:58:30 crc kubenswrapper[4782]: I1124 11:58:30.085511 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:30 crc kubenswrapper[4782]: I1124 11:58:30.090009 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qkg9c" Nov 24 11:58:30 crc kubenswrapper[4782]: I1124 11:58:30.411130 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:58:30 crc kubenswrapper[4782]: I1124 11:58:30.411189 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:58:30 crc kubenswrapper[4782]: I1124 11:58:30.637412 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:30 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:30 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:30 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:30 crc kubenswrapper[4782]: I1124 11:58:30.637465 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.264664 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.338110 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kube-api-access\") pod \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.338156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kubelet-dir\") pod \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\" (UID: \"3142e2bf-fbe4-464a-9eb2-991e66567b5f\") " Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.338395 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3142e2bf-fbe4-464a-9eb2-991e66567b5f" (UID: "3142e2bf-fbe4-464a-9eb2-991e66567b5f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.343795 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3142e2bf-fbe4-464a-9eb2-991e66567b5f" (UID: "3142e2bf-fbe4-464a-9eb2-991e66567b5f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.388947 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.438877 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.438917 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3142e2bf-fbe4-464a-9eb2-991e66567b5f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.633889 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:31 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:31 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:31 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.633982 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.873901 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3142e2bf-fbe4-464a-9eb2-991e66567b5f","Type":"ContainerDied","Data":"b0b8ebcf9551d23b3654ee5b67d012a341132a49b1ccded8a23692a63115c3cf"} Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.873940 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0b8ebcf9551d23b3654ee5b67d012a341132a49b1ccded8a23692a63115c3cf" Nov 24 11:58:31 crc kubenswrapper[4782]: I1124 11:58:31.874016 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:58:32 crc kubenswrapper[4782]: I1124 11:58:32.634647 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:32 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:32 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:32 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:32 crc kubenswrapper[4782]: I1124 11:58:32.634731 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:33 crc kubenswrapper[4782]: I1124 11:58:33.635068 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:33 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:33 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:33 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:33 crc kubenswrapper[4782]: I1124 11:58:33.635559 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:34 crc kubenswrapper[4782]: I1124 11:58:34.463937 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:58:34 crc kubenswrapper[4782]: I1124 11:58:34.463994 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-svpdv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:58:34 crc kubenswrapper[4782]: I1124 11:58:34.463994 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:58:34 crc kubenswrapper[4782]: I1124 11:58:34.464051 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-svpdv" podUID="c0d9b214-4fda-42e0-ad8a-fed4e0637175" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:58:34 crc kubenswrapper[4782]: I1124 11:58:34.634084 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:34 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:34 crc 
kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:34 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:34 crc kubenswrapper[4782]: I1124 11:58:34.634144 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:35 crc kubenswrapper[4782]: I1124 11:58:35.081887 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:58:35 crc kubenswrapper[4782]: I1124 11:58:35.088612 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e8feb84-86f6-4afe-9563-42016a7cd6ca-metrics-certs\") pod \"network-metrics-daemon-fvr97\" (UID: \"1e8feb84-86f6-4afe-9563-42016a7cd6ca\") " pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:58:35 crc kubenswrapper[4782]: I1124 11:58:35.148398 4782 patch_prober.go:28] interesting pod/console-f9d7485db-qs4j5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 24 11:58:35 crc kubenswrapper[4782]: I1124 11:58:35.148447 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qs4j5" podUID="a162cdd4-6657-40da-92f9-5f428fe8dd96" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 24 11:58:35 crc kubenswrapper[4782]: I1124 11:58:35.160875 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvr97" Nov 24 11:58:35 crc kubenswrapper[4782]: I1124 11:58:35.634776 4782 patch_prober.go:28] interesting pod/router-default-5444994796-49vlc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:58:35 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Nov 24 11:58:35 crc kubenswrapper[4782]: [+]process-running ok Nov 24 11:58:35 crc kubenswrapper[4782]: healthz check failed Nov 24 11:58:35 crc kubenswrapper[4782]: I1124 11:58:35.635112 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49vlc" podUID="af5df427-f1bc-40cd-b733-5364595562fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:58:36 crc kubenswrapper[4782]: I1124 11:58:36.642061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:36 crc kubenswrapper[4782]: I1124 11:58:36.645239 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-49vlc" Nov 24 11:58:40 crc kubenswrapper[4782]: I1124 11:58:40.639483 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fvr97"] Nov 24 11:58:44 crc kubenswrapper[4782]: I1124 11:58:44.468916 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-svpdv" Nov 24 11:58:45 crc kubenswrapper[4782]: I1124 11:58:45.155071 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:45 crc kubenswrapper[4782]: I1124 11:58:45.159445 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 11:58:45 crc kubenswrapper[4782]: I1124 11:58:45.254963 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 11:58:45 crc kubenswrapper[4782]: I1124 11:58:45.986039 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fvr97" event={"ID":"1e8feb84-86f6-4afe-9563-42016a7cd6ca","Type":"ContainerStarted","Data":"7eb9cea3fa088eac9fafaf2d21328f759308b6e2f81ffed4c32ea686735be69a"} Nov 24 11:58:53 crc kubenswrapper[4782]: E1124 11:58:53.542659 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 11:58:53 crc kubenswrapper[4782]: E1124 11:58:53.543274 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgxtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8s4nr_openshift-marketplace(5b78e362-39e7-43a6-8a13-046c45623920): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:58:53 crc kubenswrapper[4782]: E1124 11:58:53.547652 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8s4nr" podUID="5b78e362-39e7-43a6-8a13-046c45623920" Nov 24 11:58:53 crc kubenswrapper[4782]: E1124 11:58:53.603278 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 11:58:53 crc kubenswrapper[4782]: E1124 11:58:53.603476 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8m8hc_openshift-marketplace(1c668512-a10f-4fdb-9bd3-7730552844f5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:58:53 crc kubenswrapper[4782]: E1124 11:58:53.604934 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8m8hc" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.035490 4782 generic.go:334] "Generic (PLEG): container finished" podID="35ae095c-baa4-433e-b316-fc8592696a0b" containerID="d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1" exitCode=0 Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.035770 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj4mn" event={"ID":"35ae095c-baa4-433e-b316-fc8592696a0b","Type":"ContainerDied","Data":"d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1"} Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.047711 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqrw" event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerStarted","Data":"8e264324f4a8d3fb52238fff1bb09bc4dc273431533436864326283e7762fa6c"} Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.050460 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbx7s" event={"ID":"127b3d48-6f6d-4009-8ecb-d31eff88cfc7","Type":"ContainerStarted","Data":"0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0"} Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.051901 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b66k6" event={"ID":"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf","Type":"ContainerStarted","Data":"730cc0ba393d1fdc85aa23f8d58876318b77726717260c1c3fe11881a920b9f9"} Nov 24 11:58:54 crc 
kubenswrapper[4782]: I1124 11:58:54.053196 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvvs" event={"ID":"2718cb8c-7abd-486c-85ea-964738689708","Type":"ContainerStarted","Data":"d6f91a6fa2437e47f54877ea204401cd456233603d30046928c2494dd6196129"} Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.062941 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fvr97" event={"ID":"1e8feb84-86f6-4afe-9563-42016a7cd6ca","Type":"ContainerStarted","Data":"5a42d53c69777fcf2e7392b14ebc58e7766d2545cd196b29926e91b78804e42d"} Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.068059 4782 generic.go:334] "Generic (PLEG): container finished" podID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerID="45505004baebee6fe0aedfebdc4f25b0986a2237db048293f15e47993bcf0062" exitCode=0 Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.068695 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74j5j" event={"ID":"487eea64-5acd-4cb6-a57d-3904c3c86647","Type":"ContainerDied","Data":"45505004baebee6fe0aedfebdc4f25b0986a2237db048293f15e47993bcf0062"} Nov 24 11:58:54 crc kubenswrapper[4782]: E1124 11:58:54.069267 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8m8hc" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" Nov 24 11:58:54 crc kubenswrapper[4782]: E1124 11:58:54.070841 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8s4nr" podUID="5b78e362-39e7-43a6-8a13-046c45623920" Nov 24 11:58:54 crc kubenswrapper[4782]: I1124 11:58:54.671241 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xdzjf" Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.078248 4782 generic.go:334] "Generic (PLEG): container finished" podID="2718cb8c-7abd-486c-85ea-964738689708" containerID="d6f91a6fa2437e47f54877ea204401cd456233603d30046928c2494dd6196129" exitCode=0 Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.078665 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvvs" event={"ID":"2718cb8c-7abd-486c-85ea-964738689708","Type":"ContainerDied","Data":"d6f91a6fa2437e47f54877ea204401cd456233603d30046928c2494dd6196129"} Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.085912 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fvr97" event={"ID":"1e8feb84-86f6-4afe-9563-42016a7cd6ca","Type":"ContainerStarted","Data":"18dc163d18a1aed0b2552695a56c08d9698b74dd8da47b6b5e655a597b6a7b45"} Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.091131 4782 generic.go:334] "Generic (PLEG): container finished" podID="0b794f0a-8fb7-4253-8d82-40630895f983" containerID="8e264324f4a8d3fb52238fff1bb09bc4dc273431533436864326283e7762fa6c" exitCode=0 Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.091256 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqrw" 
event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerDied","Data":"8e264324f4a8d3fb52238fff1bb09bc4dc273431533436864326283e7762fa6c"} Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.093968 4782 generic.go:334] "Generic (PLEG): container finished" podID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerID="0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0" exitCode=0 Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.094036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbx7s" event={"ID":"127b3d48-6f6d-4009-8ecb-d31eff88cfc7","Type":"ContainerDied","Data":"0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0"} Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.106266 4782 generic.go:334] "Generic (PLEG): container finished" podID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerID="730cc0ba393d1fdc85aa23f8d58876318b77726717260c1c3fe11881a920b9f9" exitCode=0 Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.106305 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b66k6" event={"ID":"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf","Type":"ContainerDied","Data":"730cc0ba393d1fdc85aa23f8d58876318b77726717260c1c3fe11881a920b9f9"} Nov 24 11:58:55 crc kubenswrapper[4782]: I1124 11:58:55.124215 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fvr97" podStartSLOduration=163.124189293 podStartE2EDuration="2m43.124189293s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:55.118727994 +0000 UTC m=+184.362561773" watchObservedRunningTime="2025-11-24 11:58:55.124189293 +0000 UTC m=+184.368023062" Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.119658 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvvs" event={"ID":"2718cb8c-7abd-486c-85ea-964738689708","Type":"ContainerStarted","Data":"b7237052192ac0bd73c88b86680023ed388ee9c49c263e10d0ee05e4cb9dde13"} Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.125474 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74j5j" event={"ID":"487eea64-5acd-4cb6-a57d-3904c3c86647","Type":"ContainerStarted","Data":"9b972eac5e2fca3e552d9a7c160ddfb7e38d5f5a48ccad944662891e2bd9715f"} Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.128090 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj4mn" event={"ID":"35ae095c-baa4-433e-b316-fc8592696a0b","Type":"ContainerStarted","Data":"b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17"} Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.132889 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b66k6" event={"ID":"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf","Type":"ContainerStarted","Data":"7ff062d9c5460ae3ecdd51ef69cbc5c3401c28c7b799b5a85fb804398d46ccf8"} Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.137320 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wbvvs" podStartSLOduration=4.8642450329999996 podStartE2EDuration="37.13730514s" podCreationTimestamp="2025-11-24 11:58:19 +0000 UTC" firstStartedPulling="2025-11-24 11:58:23.609904048 +0000 UTC 
m=+152.853737817" lastFinishedPulling="2025-11-24 11:58:55.882964155 +0000 UTC m=+185.126797924" observedRunningTime="2025-11-24 11:58:56.136912938 +0000 UTC m=+185.380746707" watchObservedRunningTime="2025-11-24 11:58:56.13730514 +0000 UTC m=+185.381138909" Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.172743 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nj4mn" podStartSLOduration=2.178707434 podStartE2EDuration="34.172728374s" podCreationTimestamp="2025-11-24 11:58:22 +0000 UTC" firstStartedPulling="2025-11-24 11:58:23.53221123 +0000 UTC m=+152.776044999" lastFinishedPulling="2025-11-24 11:58:55.52623218 +0000 UTC m=+184.770065939" observedRunningTime="2025-11-24 11:58:56.156941513 +0000 UTC m=+185.400775282" watchObservedRunningTime="2025-11-24 11:58:56.172728374 +0000 UTC m=+185.416562143" Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.191789 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74j5j" podStartSLOduration=3.115310422 podStartE2EDuration="35.191752879s" podCreationTimestamp="2025-11-24 11:58:21 +0000 UTC" firstStartedPulling="2025-11-24 11:58:23.628547542 +0000 UTC m=+152.872381311" lastFinishedPulling="2025-11-24 11:58:55.704989999 +0000 UTC m=+184.948823768" observedRunningTime="2025-11-24 11:58:56.175392112 +0000 UTC m=+185.419225881" watchObservedRunningTime="2025-11-24 11:58:56.191752879 +0000 UTC m=+185.435586668" Nov 24 11:58:56 crc kubenswrapper[4782]: I1124 11:58:56.193790 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b66k6" podStartSLOduration=4.933145214 podStartE2EDuration="37.193772248s" podCreationTimestamp="2025-11-24 11:58:19 +0000 UTC" firstStartedPulling="2025-11-24 11:58:23.563534564 +0000 UTC m=+152.807368333" lastFinishedPulling="2025-11-24 11:58:55.824161598 +0000 UTC m=+185.067995367" observedRunningTime="2025-11-24 11:58:56.188855335 +0000 UTC m=+185.432689114" watchObservedRunningTime="2025-11-24 11:58:56.193772248 +0000 UTC m=+185.437606017" Nov 24 11:58:57 crc kubenswrapper[4782]: I1124 11:58:57.139118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqrw" event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerStarted","Data":"9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293"} Nov 24 11:58:57 crc kubenswrapper[4782]: I1124 11:58:57.140960 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbx7s" event={"ID":"127b3d48-6f6d-4009-8ecb-d31eff88cfc7","Type":"ContainerStarted","Data":"3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830"} Nov 24 11:58:57 crc kubenswrapper[4782]: I1124 11:58:57.160016 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ccqrw" podStartSLOduration=4.473870047 podStartE2EDuration="38.159999346s" podCreationTimestamp="2025-11-24 11:58:19 +0000 UTC" firstStartedPulling="2025-11-24 11:58:22.269088925 +0000 UTC m=+151.512922694" lastFinishedPulling="2025-11-24 11:58:55.955218214 +0000 UTC m=+185.199051993" observedRunningTime="2025-11-24 11:58:57.154447504 +0000 UTC m=+186.398281273" watchObservedRunningTime="2025-11-24 11:58:57.159999346 +0000 UTC m=+186.403833115" Nov 24 11:58:59 crc kubenswrapper[4782]: I1124 11:58:59.580279 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:58:59 crc kubenswrapper[4782]: I1124 11:58:59.596189 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fbx7s" podStartSLOduration=8.039936812 podStartE2EDuration="40.596174306s" podCreationTimestamp="2025-11-24 11:58:19 +0000 UTC" firstStartedPulling="2025-11-24 11:58:23.567705456 +0000 UTC m=+152.811539225" lastFinishedPulling="2025-11-24 11:58:56.12394295 +0000 UTC m=+185.367776719" observedRunningTime="2025-11-24 11:58:57.184827371 +0000 UTC m=+186.428661140" watchObservedRunningTime="2025-11-24 11:58:59.596174306 +0000 UTC m=+188.840008075" Nov 24 11:58:59 crc kubenswrapper[4782]: I1124 11:58:59.661321 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ccqrw" Nov 24 11:58:59 crc kubenswrapper[4782]: I1124 11:58:59.661613 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ccqrw" Nov 24 11:58:59 crc kubenswrapper[4782]: I1124 11:58:59.994086 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ccqrw" Nov 24 11:58:59 crc kubenswrapper[4782]: I1124 11:58:59.996347 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:58:59 crc kubenswrapper[4782]: I1124 11:58:59.996393 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.050637 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.096947 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.097008 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.159638 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.410776 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.411102 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.471905 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.471947 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 
11:59:00 crc kubenswrapper[4782]: I1124 11:59:00.513153 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:59:01 crc kubenswrapper[4782]: I1124 11:59:01.199597 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 11:59:01 crc kubenswrapper[4782]: I1124 11:59:01.201173 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.018857 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74j5j" Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.018905 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74j5j" Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.066502 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74j5j" Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.201770 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74j5j" Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.378712 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rt2c7"] Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.406417 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nj4mn" Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.406473 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nj4mn" Nov 24 11:59:02 crc kubenswrapper[4782]: I1124 11:59:02.473862 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nj4mn" Nov 24 11:59:03 crc kubenswrapper[4782]: I1124 11:59:03.208104 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nj4mn" Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.275287 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbx7s"] Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.275532 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fbx7s" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="registry-server" containerID="cri-o://3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830" gracePeriod=2 Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.479288 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj4mn"] Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.783747 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.929055 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-utilities\") pod \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.929163 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcgv7\" (UniqueName: \"kubernetes.io/projected/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-kube-api-access-hcgv7\") pod \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.929195 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-catalog-content\") pod \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\" (UID: \"127b3d48-6f6d-4009-8ecb-d31eff88cfc7\") " Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.930153 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-utilities" (OuterVolumeSpecName: "utilities") pod "127b3d48-6f6d-4009-8ecb-d31eff88cfc7" (UID: "127b3d48-6f6d-4009-8ecb-d31eff88cfc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.941795 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-kube-api-access-hcgv7" (OuterVolumeSpecName: "kube-api-access-hcgv7") pod "127b3d48-6f6d-4009-8ecb-d31eff88cfc7" (UID: "127b3d48-6f6d-4009-8ecb-d31eff88cfc7"). InnerVolumeSpecName "kube-api-access-hcgv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:04 crc kubenswrapper[4782]: I1124 11:59:04.984146 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "127b3d48-6f6d-4009-8ecb-d31eff88cfc7" (UID: "127b3d48-6f6d-4009-8ecb-d31eff88cfc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.030075 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcgv7\" (UniqueName: \"kubernetes.io/projected/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-kube-api-access-hcgv7\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.030114 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.030128 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127b3d48-6f6d-4009-8ecb-d31eff88cfc7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.182494 4782 generic.go:334] "Generic (PLEG): container finished" podID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerID="3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830" exitCode=0 Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.182579 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbx7s" event={"ID":"127b3d48-6f6d-4009-8ecb-d31eff88cfc7","Type":"ContainerDied","Data":"3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830"} Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.182589 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbx7s" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.182624 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbx7s" event={"ID":"127b3d48-6f6d-4009-8ecb-d31eff88cfc7","Type":"ContainerDied","Data":"48bf9452c32f0952a5b28b898d85bd03658cc48beda1432fa181e67d8ce92db2"} Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.182644 4782 scope.go:117] "RemoveContainer" containerID="3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.182707 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nj4mn" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="registry-server" containerID="cri-o://b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17" gracePeriod=2 Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.208163 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbx7s"] Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.214010 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fbx7s"] Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.216009 4782 scope.go:117] "RemoveContainer" containerID="0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.230646 4782 scope.go:117] "RemoveContainer" containerID="4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.292093 4782 scope.go:117] "RemoveContainer" containerID="3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830" Nov 24 11:59:05 crc kubenswrapper[4782]: E1124 11:59:05.293524 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830\": container with ID starting with 3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830 not found: ID does not exist" containerID="3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.293612 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830"} err="failed to get container status \"3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830\": rpc error: code = NotFound desc = could not find container \"3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830\": container with ID starting with 3f755a75555e9d64f37b80d452521845e00bbffce94c636b7a4453b2b2a97830 not found: ID does not exist" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.293669 4782 scope.go:117] "RemoveContainer" containerID="0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0" Nov 24 11:59:05 crc kubenswrapper[4782]: E1124 11:59:05.294148 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0\": container with ID starting with 0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0 not found: ID does not exist" containerID="0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.294176 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0"} err="failed to get container status \"0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0\": rpc error: code = NotFound desc = could not find container \"0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0\": container with ID starting with 0d0d7b8c2b4cab23df4c12c9037ab69d34d380295aa9a6e7272d08609310eed0 not found: ID does not exist" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.294199 4782 scope.go:117] "RemoveContainer" containerID="4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4" Nov 24 11:59:05 crc kubenswrapper[4782]: E1124 11:59:05.294602 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4\": container with ID starting with 4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4 not found: ID does not exist" containerID="4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.294634 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4"} err="failed to get container status \"4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4\": rpc error: code = NotFound desc = could not find container \"4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4\": container with ID starting with 4c2e2ff02887aac1077e40d186ac818074b64361c8abe77658754a14f25303e4 not found: ID does not exist" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.496992 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" 
path="/var/lib/kubelet/pods/127b3d48-6f6d-4009-8ecb-d31eff88cfc7/volumes" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.669744 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj4mn" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.740792 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-utilities\") pod \"35ae095c-baa4-433e-b316-fc8592696a0b\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.740869 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-catalog-content\") pod \"35ae095c-baa4-433e-b316-fc8592696a0b\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.740918 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rrdm\" (UniqueName: \"kubernetes.io/projected/35ae095c-baa4-433e-b316-fc8592696a0b-kube-api-access-4rrdm\") pod \"35ae095c-baa4-433e-b316-fc8592696a0b\" (UID: \"35ae095c-baa4-433e-b316-fc8592696a0b\") " Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.741696 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-utilities" (OuterVolumeSpecName: "utilities") pod "35ae095c-baa4-433e-b316-fc8592696a0b" (UID: "35ae095c-baa4-433e-b316-fc8592696a0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.743886 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ae095c-baa4-433e-b316-fc8592696a0b-kube-api-access-4rrdm" (OuterVolumeSpecName: "kube-api-access-4rrdm") pod "35ae095c-baa4-433e-b316-fc8592696a0b" (UID: "35ae095c-baa4-433e-b316-fc8592696a0b"). InnerVolumeSpecName "kube-api-access-4rrdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.770289 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35ae095c-baa4-433e-b316-fc8592696a0b" (UID: "35ae095c-baa4-433e-b316-fc8592696a0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.841976 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.842232 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ae095c-baa4-433e-b316-fc8592696a0b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:05 crc kubenswrapper[4782]: I1124 11:59:05.842303 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rrdm\" (UniqueName: \"kubernetes.io/projected/35ae095c-baa4-433e-b316-fc8592696a0b-kube-api-access-4rrdm\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.189564 4782 generic.go:334] "Generic (PLEG): container finished" podID="35ae095c-baa4-433e-b316-fc8592696a0b" containerID="b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17" exitCode=0 Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.189634 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj4mn" event={"ID":"35ae095c-baa4-433e-b316-fc8592696a0b","Type":"ContainerDied","Data":"b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17"} Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.189673 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj4mn" event={"ID":"35ae095c-baa4-433e-b316-fc8592696a0b","Type":"ContainerDied","Data":"907fbc142277dacb70b1459310076c48b177cd37934c79882a5c9646a926d30c"} Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.189695 4782 scope.go:117] "RemoveContainer" containerID="b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.189798 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj4mn" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.216331 4782 scope.go:117] "RemoveContainer" containerID="d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.230214 4782 scope.go:117] "RemoveContainer" containerID="d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.237885 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj4mn"] Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.242438 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj4mn"] Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.249055 4782 scope.go:117] "RemoveContainer" containerID="b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17" Nov 24 11:59:06 crc kubenswrapper[4782]: E1124 11:59:06.249467 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17\": container with ID starting with b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17 not found: ID does not exist" containerID="b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.249529 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17"} err="failed to get container status \"b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17\": rpc error: code = NotFound desc = could not find container \"b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17\": container with ID starting with b1ec5d9d7b7d849a81af5f8b93d5095ef256f437e4eb579fd01519bb8d74bb17 not found: ID does not exist" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.249567 4782 scope.go:117] "RemoveContainer" containerID="d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1" Nov 24 11:59:06 crc kubenswrapper[4782]: E1124 11:59:06.249856 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1\": container with ID starting with d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1 not found: ID does not exist" containerID="d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.249878 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1"} err="failed to get container status \"d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1\": rpc error: code = NotFound desc = could not find container \"d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1\": container with ID starting with d9fc4e66f07baae165e79e808c9685ca1509ae680a65f638cb01e10e62038ff1 not found: ID does not exist" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.249894 4782 scope.go:117] "RemoveContainer" containerID="d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4" Nov 24 11:59:06 crc kubenswrapper[4782]: E1124 11:59:06.250095 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4\": container with ID starting with d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4 not found: ID does not exist" containerID="d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4" Nov 24 11:59:06 crc kubenswrapper[4782]: I1124 11:59:06.250114 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4"} err="failed to get container status \"d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4\": rpc error: code = NotFound desc = could not find container \"d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4\": container with ID starting with d0800a54e047db087604b7a9defe4ece2afd74abe98438fa8ca1202a08e6bdb4 not found: ID does not exist" Nov 24 11:59:07 crc kubenswrapper[4782]: I1124 11:59:07.208915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8m8hc" event={"ID":"1c668512-a10f-4fdb-9bd3-7730552844f5","Type":"ContainerStarted","Data":"797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2"} Nov 24 11:59:07 crc kubenswrapper[4782]: I1124 11:59:07.497891 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" path="/var/lib/kubelet/pods/35ae095c-baa4-433e-b316-fc8592696a0b/volumes" Nov 24 11:59:08 crc kubenswrapper[4782]: I1124 11:59:08.216211 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerID="797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2" exitCode=0 Nov 24 11:59:08 crc kubenswrapper[4782]: I1124 11:59:08.216795 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8m8hc" event={"ID":"1c668512-a10f-4fdb-9bd3-7730552844f5","Type":"ContainerDied","Data":"797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2"} Nov 24 11:59:09 crc kubenswrapper[4782]: I1124 11:59:09.224256 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8m8hc" event={"ID":"1c668512-a10f-4fdb-9bd3-7730552844f5","Type":"ContainerStarted","Data":"d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8"} Nov 24 11:59:09 crc kubenswrapper[4782]: I1124 11:59:09.255826 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8m8hc" podStartSLOduration=3.292304924 podStartE2EDuration="47.255811207s" podCreationTimestamp="2025-11-24 11:58:22 +0000 UTC" firstStartedPulling="2025-11-24 11:58:24.638263879 +0000 UTC m=+153.882097648" lastFinishedPulling="2025-11-24 11:59:08.601770162 +0000 UTC m=+197.845603931" observedRunningTime="2025-11-24 11:59:09.253217181 +0000 UTC m=+198.497050950" watchObservedRunningTime="2025-11-24 11:59:09.255811207 +0000 UTC m=+198.499644976" Nov 24 11:59:09 crc kubenswrapper[4782]: I1124 11:59:09.657453 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ccqrw" Nov 24 11:59:10 crc kubenswrapper[4782]: I1124 11:59:10.032406 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:59:11 crc kubenswrapper[4782]: I1124 11:59:11.233184 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="5b78e362-39e7-43a6-8a13-046c45623920" containerID="dc3f0e9fe1d61a4724bc52f81494545deb745b23ae50906762468277aed237da" exitCode=0 Nov 24 11:59:11 crc kubenswrapper[4782]: I1124 11:59:11.233216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s4nr" event={"ID":"5b78e362-39e7-43a6-8a13-046c45623920","Type":"ContainerDied","Data":"dc3f0e9fe1d61a4724bc52f81494545deb745b23ae50906762468277aed237da"} Nov 24 11:59:12 crc kubenswrapper[4782]: I1124 11:59:12.240698 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s4nr" event={"ID":"5b78e362-39e7-43a6-8a13-046c45623920","Type":"ContainerStarted","Data":"7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62"} Nov 24 11:59:12 crc kubenswrapper[4782]: I1124 11:59:12.873760 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8s4nr" podStartSLOduration=3.653936225 podStartE2EDuration="50.873745215s" podCreationTimestamp="2025-11-24 11:58:22 +0000 UTC" firstStartedPulling="2025-11-24 11:58:24.64205486 +0000 UTC m=+153.885888629" lastFinishedPulling="2025-11-24 11:59:11.86186385 +0000 UTC m=+201.105697619" observedRunningTime="2025-11-24 11:59:12.258034544 +0000 UTC m=+201.501868313" watchObservedRunningTime="2025-11-24 11:59:12.873745215 +0000 UTC m=+202.117578984" Nov 24 11:59:12 crc kubenswrapper[4782]: I1124 11:59:12.874911 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b66k6"] Nov 24 11:59:12 crc kubenswrapper[4782]: I1124 11:59:12.875185 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b66k6" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="registry-server" containerID="cri-o://7ff062d9c5460ae3ecdd51ef69cbc5c3401c28c7b799b5a85fb804398d46ccf8" gracePeriod=2 Nov 24 11:59:12 crc kubenswrapper[4782]: I1124 11:59:12.906694 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8s4nr" Nov 24 11:59:12 crc kubenswrapper[4782]: I1124 11:59:12.906747 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8s4nr" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.009393 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8m8hc" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.009800 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8m8hc" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.249408 4782 generic.go:334] "Generic (PLEG): container finished" podID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerID="7ff062d9c5460ae3ecdd51ef69cbc5c3401c28c7b799b5a85fb804398d46ccf8" exitCode=0 Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.249481 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b66k6" event={"ID":"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf","Type":"ContainerDied","Data":"7ff062d9c5460ae3ecdd51ef69cbc5c3401c28c7b799b5a85fb804398d46ccf8"} Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.249527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b66k6" 
event={"ID":"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf","Type":"ContainerDied","Data":"44653b1f1913346026ffe3cb62b806123173bc122ebc2175133a6a4123e107b1"} Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.249540 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44653b1f1913346026ffe3cb62b806123173bc122ebc2175133a6a4123e107b1" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.253397 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.330461 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2996\" (UniqueName: \"kubernetes.io/projected/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-kube-api-access-k2996\") pod \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.330533 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-catalog-content\") pod \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.330574 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-utilities\") pod \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\" (UID: \"8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf\") " Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.331920 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-utilities" (OuterVolumeSpecName: "utilities") pod "8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" (UID: "8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.346777 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-kube-api-access-k2996" (OuterVolumeSpecName: "kube-api-access-k2996") pod "8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" (UID: "8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf"). InnerVolumeSpecName "kube-api-access-k2996". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.407832 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" (UID: "8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.434736 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2996\" (UniqueName: \"kubernetes.io/projected/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-kube-api-access-k2996\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.434973 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.434983 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:13 crc kubenswrapper[4782]: I1124 11:59:13.942891 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8s4nr" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="registry-server" probeResult="failure" output=< Nov 24 11:59:13 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 11:59:13 crc kubenswrapper[4782]: > Nov 24 11:59:14 crc kubenswrapper[4782]: I1124 11:59:14.057111 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8m8hc" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="registry-server" probeResult="failure" output=< Nov 24 11:59:14 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 11:59:14 crc kubenswrapper[4782]: > Nov 24 11:59:14 crc kubenswrapper[4782]: I1124 11:59:14.253365 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b66k6" Nov 24 11:59:14 crc kubenswrapper[4782]: I1124 11:59:14.273740 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b66k6"] Nov 24 11:59:14 crc kubenswrapper[4782]: I1124 11:59:14.278060 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b66k6"] Nov 24 11:59:15 crc kubenswrapper[4782]: I1124 11:59:15.502103 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" path="/var/lib/kubelet/pods/8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf/volumes" Nov 24 11:59:22 crc kubenswrapper[4782]: I1124 11:59:22.967128 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8s4nr" Nov 24 11:59:23 crc kubenswrapper[4782]: I1124 11:59:23.031574 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8s4nr" Nov 24 11:59:23 crc kubenswrapper[4782]: I1124 11:59:23.073334 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8m8hc" Nov 24 11:59:23 crc kubenswrapper[4782]: I1124 11:59:23.114407 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8m8hc" Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.263927 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8m8hc"] Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.304981 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8m8hc" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="registry-server" containerID="cri-o://d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8" gracePeriod=2 Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.703568 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8m8hc" Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.785366 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-catalog-content\") pod \"1c668512-a10f-4fdb-9bd3-7730552844f5\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.785761 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-utilities\") pod \"1c668512-a10f-4fdb-9bd3-7730552844f5\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.785867 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kb9b\" (UniqueName: \"kubernetes.io/projected/1c668512-a10f-4fdb-9bd3-7730552844f5-kube-api-access-9kb9b\") pod \"1c668512-a10f-4fdb-9bd3-7730552844f5\" (UID: \"1c668512-a10f-4fdb-9bd3-7730552844f5\") " Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.787565 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-utilities" (OuterVolumeSpecName: "utilities") pod "1c668512-a10f-4fdb-9bd3-7730552844f5" (UID: "1c668512-a10f-4fdb-9bd3-7730552844f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.791538 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c668512-a10f-4fdb-9bd3-7730552844f5-kube-api-access-9kb9b" (OuterVolumeSpecName: "kube-api-access-9kb9b") pod "1c668512-a10f-4fdb-9bd3-7730552844f5" (UID: "1c668512-a10f-4fdb-9bd3-7730552844f5"). InnerVolumeSpecName "kube-api-access-9kb9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.876672 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c668512-a10f-4fdb-9bd3-7730552844f5" (UID: "1c668512-a10f-4fdb-9bd3-7730552844f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.887507 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.887546 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kb9b\" (UniqueName: \"kubernetes.io/projected/1c668512-a10f-4fdb-9bd3-7730552844f5-kube-api-access-9kb9b\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:24 crc kubenswrapper[4782]: I1124 11:59:24.887559 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c668512-a10f-4fdb-9bd3-7730552844f5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.312177 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerID="d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8" exitCode=0 Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.312227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8m8hc" event={"ID":"1c668512-a10f-4fdb-9bd3-7730552844f5","Type":"ContainerDied","Data":"d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8"} Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.312280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8m8hc" event={"ID":"1c668512-a10f-4fdb-9bd3-7730552844f5","Type":"ContainerDied","Data":"7b2caefccae10bff17ae3fced0b3bc4098e13b17d91983955d4385dade9e092d"} Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.312274 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8m8hc" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.312349 4782 scope.go:117] "RemoveContainer" containerID="d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.326959 4782 scope.go:117] "RemoveContainer" containerID="797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.346053 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8m8hc"] Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.349187 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8m8hc"] Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.352715 4782 scope.go:117] "RemoveContainer" containerID="329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.371085 4782 scope.go:117] "RemoveContainer" containerID="d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8" Nov 24 11:59:25 crc kubenswrapper[4782]: E1124 11:59:25.371602 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8\": container with ID starting with d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8 not found: ID does not exist" containerID="d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.371744 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8"} err="failed to get container status \"d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8\": rpc error: code = NotFound desc = could not find container \"d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8\": container with ID starting with d7830e39a3a93e732c219b8494fa87189a6c6f6edec6715b105685d6a9d15fa8 not found: ID does not exist" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.371860 4782 scope.go:117] "RemoveContainer" containerID="797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2" Nov 24 11:59:25 crc kubenswrapper[4782]: E1124 11:59:25.372340 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2\": container with ID starting with 797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2 not found: ID does not exist" containerID="797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.372399 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2"} err="failed to get container status \"797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2\": rpc error: code = NotFound desc = could not find container \"797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2\": container with ID starting with 797ab49497885befab143bb2427d9366dffddeb3ca43312128e9d1ba416579d2 not found: ID does not exist" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.372427 4782 scope.go:117] "RemoveContainer" 
containerID="329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe" Nov 24 11:59:25 crc kubenswrapper[4782]: E1124 11:59:25.372722 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe\": container with ID starting with 329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe not found: ID does not exist" containerID="329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.372753 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe"} err="failed to get container status \"329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe\": rpc error: code = NotFound desc = could not find container \"329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe\": container with ID starting with 329627137a83d615e724dd7628d3e084324a2f9ed95be83fd117027310b53bbe not found: ID does not exist" Nov 24 11:59:25 crc kubenswrapper[4782]: I1124 11:59:25.504922 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" path="/var/lib/kubelet/pods/1c668512-a10f-4fdb-9bd3-7730552844f5/volumes" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.398880 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerName="oauth-openshift" containerID="cri-o://7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89" gracePeriod=15 Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.781911 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.826808 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.826866 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzwx4\" (UniqueName: \"kubernetes.io/projected/3b2d93f2-8a27-4def-af47-b6a6f04039b4-kube-api-access-qzwx4\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.826903 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-cliconfig\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.826954 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-service-ca\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.826990 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-dir\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827025 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827045 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827096 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-error\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 
11:59:27.827156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827190 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827216 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.827264 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session\") pod \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\" (UID: \"3b2d93f2-8a27-4def-af47-b6a6f04039b4\") " Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.828539 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.829202 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.829404 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.829486 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.833033 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.833971 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.834403 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.834893 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.835316 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.837093 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.837487 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.838241 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.838977 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.840673 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2d93f2-8a27-4def-af47-b6a6f04039b4-kube-api-access-qzwx4" (OuterVolumeSpecName: "kube-api-access-qzwx4") pod "3b2d93f2-8a27-4def-af47-b6a6f04039b4" (UID: "3b2d93f2-8a27-4def-af47-b6a6f04039b4"). InnerVolumeSpecName "kube-api-access-qzwx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928795 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928837 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzwx4\" (UniqueName: \"kubernetes.io/projected/3b2d93f2-8a27-4def-af47-b6a6f04039b4-kube-api-access-qzwx4\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928848 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928857 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928866 4782 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928875 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928883 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b2d93f2-8a27-4def-af47-b6a6f04039b4-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928893 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928903 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928914 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928923 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928932 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928940 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:27 crc kubenswrapper[4782]: I1124 11:59:27.928949 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b2d93f2-8a27-4def-af47-b6a6f04039b4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.344495 4782 generic.go:334] "Generic (PLEG): container finished" podID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerID="7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89" exitCode=0 Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.344613 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" event={"ID":"3b2d93f2-8a27-4def-af47-b6a6f04039b4","Type":"ContainerDied","Data":"7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89"} Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.344675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" event={"ID":"3b2d93f2-8a27-4def-af47-b6a6f04039b4","Type":"ContainerDied","Data":"950e17a843c45e5b91d054cd22905683fb6a5fd24b2f75bcfac2b00f298fa925"} Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.344716 4782 scope.go:117] "RemoveContainer" containerID="7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89" Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.345127 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rt2c7" Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.382452 4782 scope.go:117] "RemoveContainer" containerID="7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89" Nov 24 11:59:28 crc kubenswrapper[4782]: E1124 11:59:28.383140 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89\": container with ID starting with 7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89 not found: ID does not exist" containerID="7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89" Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.383211 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89"} err="failed to get container status \"7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89\": rpc error: code = NotFound desc = could not find container \"7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89\": container with ID starting with 7afde13211f57132019b94177cd833fe958fb2f570b4a42c1ae5bea1bf2e4f89 not found: ID does not exist" Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.403137 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rt2c7"] Nov 24 11:59:28 crc kubenswrapper[4782]: I1124 11:59:28.408723 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rt2c7"] Nov 24 11:59:29 crc kubenswrapper[4782]: I1124 11:59:29.502453 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" path="/var/lib/kubelet/pods/3b2d93f2-8a27-4def-af47-b6a6f04039b4/volumes" Nov 24 11:59:30 crc kubenswrapper[4782]: I1124 11:59:30.410533 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:59:30 crc kubenswrapper[4782]: I1124 11:59:30.410904 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:59:30 crc kubenswrapper[4782]: I1124 11:59:30.410963 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 11:59:30 crc kubenswrapper[4782]: I1124 11:59:30.411943 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:59:30 crc kubenswrapper[4782]: I1124 11:59:30.412128 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" 
podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97" gracePeriod=600 Nov 24 11:59:31 crc kubenswrapper[4782]: I1124 11:59:31.370432 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97" exitCode=0 Nov 24 11:59:31 crc kubenswrapper[4782]: I1124 11:59:31.370509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97"} Nov 24 11:59:31 crc kubenswrapper[4782]: I1124 11:59:31.370842 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"94138e351ef9369d7758b9a7396f3fb07dad56abcb5a94c945494ac177a2cb78"} Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.505451 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76cf47c974-cqfkl"] Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.506784 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3142e2bf-fbe4-464a-9eb2-991e66567b5f" containerName="pruner" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.506892 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3142e2bf-fbe4-464a-9eb2-991e66567b5f" containerName="pruner" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.506969 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507057 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.507124 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerName="oauth-openshift" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507178 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerName="oauth-openshift" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.507230 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507281 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.507341 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507416 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.507482 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507535 4782 
state_mem.go:107] "Deleted CPUSet assignment" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.507592 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507655 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.507747 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507799 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.507857 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.507913 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.508034 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.508093 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="extract-content" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.508149 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.508199 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.508255 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.508307 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.508360 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.508432 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: E1124 11:59:35.508497 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.508676 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="extract-utilities" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.509043 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c668512-a10f-4fdb-9bd3-7730552844f5" containerName="registry-server" Nov 24 11:59:35 crc 
kubenswrapper[4782]: I1124 11:59:35.509138 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3142e2bf-fbe4-464a-9eb2-991e66567b5f" containerName="pruner" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.509218 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ae095c-baa4-433e-b316-fc8592696a0b" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.509294 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="127b3d48-6f6d-4009-8ecb-d31eff88cfc7" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.509422 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3b0ec5-0727-4ae0-bf22-c9b8d9752abf" containerName="registry-server" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.509635 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2d93f2-8a27-4def-af47-b6a6f04039b4" containerName="oauth-openshift" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.510604 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.519802 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.519815 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.530388 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.535089 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.535101 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.535549 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.540261 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.546827 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.547929 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.549395 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.547955 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.560423 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.562377 4782 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.576978 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.577989 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.579270 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76cf47c974-cqfkl"] Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653301 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-error\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653452 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c25l\" (UniqueName: \"kubernetes.io/projected/b8701ac7-8598-4790-a7aa-42d7f39c925f-kube-api-access-8c25l\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653496 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653524 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653551 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653585 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8701ac7-8598-4790-a7aa-42d7f39c925f-audit-dir\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653631 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-service-ca\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653653 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653682 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-session\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653711 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-login\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653733 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653820 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-router-certs\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653848 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-audit-policies\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.653868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 
11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.755476 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-service-ca\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.755535 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.755574 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-session\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.755605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-login\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.755634 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.755664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-router-certs\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.756802 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-audit-policies\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.756838 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc 
kubenswrapper[4782]: I1124 11:59:35.756894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-error\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.756951 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c25l\" (UniqueName: \"kubernetes.io/projected/b8701ac7-8598-4790-a7aa-42d7f39c925f-kube-api-access-8c25l\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.757002 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.757032 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.757068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.757134 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8701ac7-8598-4790-a7aa-42d7f39c925f-audit-dir\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.757207 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8701ac7-8598-4790-a7aa-42d7f39c925f-audit-dir\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.757720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-audit-policies\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.757739 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-service-ca\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.758336 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.758935 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.761587 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.761649 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.761911 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-login\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.762095 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-router-certs\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.762603 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.763184 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-system-session\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.763341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.776014 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8701ac7-8598-4790-a7aa-42d7f39c925f-v4-0-config-user-template-error\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.781085 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c25l\" (UniqueName: \"kubernetes.io/projected/b8701ac7-8598-4790-a7aa-42d7f39c925f-kube-api-access-8c25l\") pod \"oauth-openshift-76cf47c974-cqfkl\" (UID: \"b8701ac7-8598-4790-a7aa-42d7f39c925f\") " pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:35 crc kubenswrapper[4782]: I1124 11:59:35.850270 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:36 crc kubenswrapper[4782]: I1124 11:59:36.077979 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76cf47c974-cqfkl"] Nov 24 11:59:36 crc kubenswrapper[4782]: I1124 11:59:36.414204 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" event={"ID":"b8701ac7-8598-4790-a7aa-42d7f39c925f","Type":"ContainerStarted","Data":"1132e49025121dc5e2f1934c80add5ac332c436321d33ecac7d8d5733df5aab9"} Nov 24 11:59:36 crc kubenswrapper[4782]: I1124 11:59:36.414606 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 11:59:36 crc kubenswrapper[4782]: I1124 11:59:36.414622 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" event={"ID":"b8701ac7-8598-4790-a7aa-42d7f39c925f","Type":"ContainerStarted","Data":"4437b23a80f55ab727c363bade63f31960ca1c803cb21c20c0f3ca19f5af7da8"} Nov 24 11:59:36 crc kubenswrapper[4782]: I1124 11:59:36.443382 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" podStartSLOduration=34.443354791 podStartE2EDuration="34.443354791s" podCreationTimestamp="2025-11-24 11:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:59:36.442697731 +0000 UTC m=+225.686531520" watchObservedRunningTime="2025-11-24 11:59:36.443354791 +0000 UTC m=+225.687188580" Nov 24 11:59:36 crc kubenswrapper[4782]: I1124 11:59:36.768718 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-76cf47c974-cqfkl" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.135184 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd"] Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.136207 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.138939 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.139140 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.143313 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd"] Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.273406 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30e616a4-be07-4f13-a359-dcfcdbdec622-config-volume\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.273482 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgvv\" (UniqueName: \"kubernetes.io/projected/30e616a4-be07-4f13-a359-dcfcdbdec622-kube-api-access-kqgvv\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.273574 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30e616a4-be07-4f13-a359-dcfcdbdec622-secret-volume\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.374748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30e616a4-be07-4f13-a359-dcfcdbdec622-config-volume\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.374788 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgvv\" (UniqueName: \"kubernetes.io/projected/30e616a4-be07-4f13-a359-dcfcdbdec622-kube-api-access-kqgvv\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.374831 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30e616a4-be07-4f13-a359-dcfcdbdec622-secret-volume\") pod \"collect-profiles-29399760-tk2xd\" (UID: 
\"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.376062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30e616a4-be07-4f13-a359-dcfcdbdec622-config-volume\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.382637 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30e616a4-be07-4f13-a359-dcfcdbdec622-secret-volume\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.405696 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgvv\" (UniqueName: \"kubernetes.io/projected/30e616a4-be07-4f13-a359-dcfcdbdec622-kube-api-access-kqgvv\") pod \"collect-profiles-29399760-tk2xd\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.473611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:00 crc kubenswrapper[4782]: I1124 12:00:00.884225 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd"] Nov 24 12:00:01 crc kubenswrapper[4782]: I1124 12:00:01.543084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" event={"ID":"30e616a4-be07-4f13-a359-dcfcdbdec622","Type":"ContainerStarted","Data":"39e9175d652b73a83fa53269c4a718798153c99599c7c79829ae08104624dee1"} Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.260657 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wbvvs"] Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.260869 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wbvvs" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="registry-server" containerID="cri-o://b7237052192ac0bd73c88b86680023ed388ee9c49c263e10d0ee05e4cb9dde13" gracePeriod=30 Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.282454 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccqrw"] Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.282894 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ccqrw" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="registry-server" containerID="cri-o://9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293" gracePeriod=30 Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.290500 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7z6sc"] Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.290692 4782 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" containerID="cri-o://6ed13594d1515997999a5b6952698a7aa44d4cd9b885a874aba4b23bd2123c9c" gracePeriod=30 Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.306252 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74j5j"] Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.306506 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74j5j" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="registry-server" containerID="cri-o://9b972eac5e2fca3e552d9a7c160ddfb7e38d5f5a48ccad944662891e2bd9715f" gracePeriod=30 Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.311925 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s4nr"] Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.312433 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8s4nr" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="registry-server" containerID="cri-o://7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62" gracePeriod=30 Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.326217 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gzb6p"] Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.326832 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.341840 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gzb6p"] Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.500811 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjkbj\" (UniqueName: \"kubernetes.io/projected/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-kube-api-access-fjkbj\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.501046 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.501107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.602247 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjkbj\" (UniqueName: \"kubernetes.io/projected/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-kube-api-access-fjkbj\") pod 
\"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.604293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.604441 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.605867 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.612736 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.625457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjkbj\" (UniqueName: \"kubernetes.io/projected/b5fb7f2d-5841-44a3-a7cc-41b44c66cd73-kube-api-access-fjkbj\") pod \"marketplace-operator-79b997595-gzb6p\" (UID: \"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73\") " pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: I1124 12:00:02.638653 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:02 crc kubenswrapper[4782]: E1124 12:00:02.907548 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62 is running failed: container process not found" containerID="7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:00:02 crc kubenswrapper[4782]: E1124 12:00:02.908257 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62 is running failed: container process not found" containerID="7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:00:02 crc kubenswrapper[4782]: E1124 12:00:02.908543 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62 is running failed: container process not found" containerID="7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:00:02 crc kubenswrapper[4782]: E1124 12:00:02.908580 4782 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8s4nr" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="registry-server" Nov 24 12:00:03 crc kubenswrapper[4782]: I1124 12:00:03.082241 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gzb6p"] Nov 24 12:00:03 crc kubenswrapper[4782]: I1124 12:00:03.553799 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" event={"ID":"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73","Type":"ContainerStarted","Data":"213493ce7743181c5398816176c05e3c4bd75477e721b01e16d60b3e8dc67168"} Nov 24 12:00:04 crc kubenswrapper[4782]: I1124 12:00:04.485227 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7z6sc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 24 12:00:04 crc kubenswrapper[4782]: I1124 12:00:04.485661 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.271735 4782 generic.go:334] "Generic (PLEG): container finished" podID="0b794f0a-8fb7-4253-8d82-40630895f983" containerID="9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293" exitCode=0 Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.271807 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ccqrw" event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerDied","Data":"9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293"} Nov 24 12:00:09 crc kubenswrapper[4782]: E1124 12:00:09.618397 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293 is running failed: container process not found" containerID="9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:00:09 crc kubenswrapper[4782]: E1124 12:00:09.620317 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293 is running failed: container process not found" containerID="9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:00:09 crc kubenswrapper[4782]: E1124 12:00:09.620842 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293 is running failed: container process not found" containerID="9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:00:09 crc kubenswrapper[4782]: E1124 12:00:09.620879 4782 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-ccqrw" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="registry-server" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.740340 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74j5j" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.790248 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b78e362-39e7-43a6-8a13-046c45623920" containerID="7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62" exitCode=0 Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.790335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s4nr" event={"ID":"5b78e362-39e7-43a6-8a13-046c45623920","Type":"ContainerDied","Data":"7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62"} Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.792290 4782 generic.go:334] "Generic (PLEG): container finished" podID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerID="6ed13594d1515997999a5b6952698a7aa44d4cd9b885a874aba4b23bd2123c9c" exitCode=0 Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.792362 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" event={"ID":"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb","Type":"ContainerDied","Data":"6ed13594d1515997999a5b6952698a7aa44d4cd9b885a874aba4b23bd2123c9c"} Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.794542 4782 generic.go:334] "Generic (PLEG): container finished" podID="2718cb8c-7abd-486c-85ea-964738689708" containerID="b7237052192ac0bd73c88b86680023ed388ee9c49c263e10d0ee05e4cb9dde13" exitCode=0 Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.794601 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvvs" event={"ID":"2718cb8c-7abd-486c-85ea-964738689708","Type":"ContainerDied","Data":"b7237052192ac0bd73c88b86680023ed388ee9c49c263e10d0ee05e4cb9dde13"} Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.796229 4782 generic.go:334] "Generic (PLEG): container finished" podID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerID="9b972eac5e2fca3e552d9a7c160ddfb7e38d5f5a48ccad944662891e2bd9715f" exitCode=0 Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.796252 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74j5j" event={"ID":"487eea64-5acd-4cb6-a57d-3904c3c86647","Type":"ContainerDied","Data":"9b972eac5e2fca3e552d9a7c160ddfb7e38d5f5a48ccad944662891e2bd9715f"} Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.796273 4782 scope.go:117] "RemoveContainer" containerID="9b972eac5e2fca3e552d9a7c160ddfb7e38d5f5a48ccad944662891e2bd9715f" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.796389 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74j5j" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.811736 4782 scope.go:117] "RemoveContainer" containerID="45505004baebee6fe0aedfebdc4f25b0986a2237db048293f15e47993bcf0062" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.837170 4782 scope.go:117] "RemoveContainer" containerID="1e8c98d00d42496d3c77674900001dd332b179930c68e5b5b5f518361ca1a062" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.911127 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s4nr" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.914197 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccqrw" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.919309 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.932687 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.941164 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-catalog-content\") pod \"487eea64-5acd-4cb6-a57d-3904c3c86647\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.941233 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-utilities\") pod \"487eea64-5acd-4cb6-a57d-3904c3c86647\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.941275 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmkp9\" (UniqueName: \"kubernetes.io/projected/487eea64-5acd-4cb6-a57d-3904c3c86647-kube-api-access-nmkp9\") pod \"487eea64-5acd-4cb6-a57d-3904c3c86647\" (UID: \"487eea64-5acd-4cb6-a57d-3904c3c86647\") " Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.942525 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-utilities" (OuterVolumeSpecName: "utilities") pod "487eea64-5acd-4cb6-a57d-3904c3c86647" (UID: "487eea64-5acd-4cb6-a57d-3904c3c86647"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.950016 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487eea64-5acd-4cb6-a57d-3904c3c86647-kube-api-access-nmkp9" (OuterVolumeSpecName: "kube-api-access-nmkp9") pod "487eea64-5acd-4cb6-a57d-3904c3c86647" (UID: "487eea64-5acd-4cb6-a57d-3904c3c86647"). InnerVolumeSpecName "kube-api-access-nmkp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:09 crc kubenswrapper[4782]: I1124 12:00:09.979673 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "487eea64-5acd-4cb6-a57d-3904c3c86647" (UID: "487eea64-5acd-4cb6-a57d-3904c3c86647"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.042759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-catalog-content\") pod \"5b78e362-39e7-43a6-8a13-046c45623920\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.042859 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-catalog-content\") pod \"2718cb8c-7abd-486c-85ea-964738689708\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.042924 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhkmc\" (UniqueName: \"kubernetes.io/projected/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-kube-api-access-qhkmc\") pod \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.042954 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkvqd\" (UniqueName: \"kubernetes.io/projected/0b794f0a-8fb7-4253-8d82-40630895f983-kube-api-access-lkvqd\") pod \"0b794f0a-8fb7-4253-8d82-40630895f983\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043045 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-utilities\") pod \"0b794f0a-8fb7-4253-8d82-40630895f983\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043109 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9l2w\" (UniqueName: \"kubernetes.io/projected/2718cb8c-7abd-486c-85ea-964738689708-kube-api-access-h9l2w\") pod \"2718cb8c-7abd-486c-85ea-964738689708\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043192 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-utilities\") pod \"2718cb8c-7abd-486c-85ea-964738689708\" (UID: \"2718cb8c-7abd-486c-85ea-964738689708\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043224 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-utilities\") pod \"5b78e362-39e7-43a6-8a13-046c45623920\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043260 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgxtb\" (UniqueName: \"kubernetes.io/projected/5b78e362-39e7-43a6-8a13-046c45623920-kube-api-access-sgxtb\") pod \"5b78e362-39e7-43a6-8a13-046c45623920\" (UID: \"5b78e362-39e7-43a6-8a13-046c45623920\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043278 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-catalog-content\") pod 
\"0b794f0a-8fb7-4253-8d82-40630895f983\" (UID: \"0b794f0a-8fb7-4253-8d82-40630895f983\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043298 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-operator-metrics\") pod \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043341 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-trusted-ca\") pod \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\" (UID: \"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb\") " Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043641 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043657 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/487eea64-5acd-4cb6-a57d-3904c3c86647-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.043668 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmkp9\" (UniqueName: \"kubernetes.io/projected/487eea64-5acd-4cb6-a57d-3904c3c86647-kube-api-access-nmkp9\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.045464 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-utilities" (OuterVolumeSpecName: "utilities") pod "0b794f0a-8fb7-4253-8d82-40630895f983" (UID: "0b794f0a-8fb7-4253-8d82-40630895f983"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.045489 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-utilities" (OuterVolumeSpecName: "utilities") pod "5b78e362-39e7-43a6-8a13-046c45623920" (UID: "5b78e362-39e7-43a6-8a13-046c45623920"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.046292 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-utilities" (OuterVolumeSpecName: "utilities") pod "2718cb8c-7abd-486c-85ea-964738689708" (UID: "2718cb8c-7abd-486c-85ea-964738689708"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.047840 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-kube-api-access-qhkmc" (OuterVolumeSpecName: "kube-api-access-qhkmc") pod "f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" (UID: "f48964dd-f8a3-4e2d-8df8-78bbe81a68eb"). InnerVolumeSpecName "kube-api-access-qhkmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.047980 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b78e362-39e7-43a6-8a13-046c45623920-kube-api-access-sgxtb" (OuterVolumeSpecName: "kube-api-access-sgxtb") pod "5b78e362-39e7-43a6-8a13-046c45623920" (UID: "5b78e362-39e7-43a6-8a13-046c45623920"). InnerVolumeSpecName "kube-api-access-sgxtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.048640 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" (UID: "f48964dd-f8a3-4e2d-8df8-78bbe81a68eb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.050892 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2718cb8c-7abd-486c-85ea-964738689708-kube-api-access-h9l2w" (OuterVolumeSpecName: "kube-api-access-h9l2w") pod "2718cb8c-7abd-486c-85ea-964738689708" (UID: "2718cb8c-7abd-486c-85ea-964738689708"). InnerVolumeSpecName "kube-api-access-h9l2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.051929 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" (UID: "f48964dd-f8a3-4e2d-8df8-78bbe81a68eb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.052575 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b794f0a-8fb7-4253-8d82-40630895f983-kube-api-access-lkvqd" (OuterVolumeSpecName: "kube-api-access-lkvqd") pod "0b794f0a-8fb7-4253-8d82-40630895f983" (UID: "0b794f0a-8fb7-4253-8d82-40630895f983"). InnerVolumeSpecName "kube-api-access-lkvqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.094049 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2718cb8c-7abd-486c-85ea-964738689708" (UID: "2718cb8c-7abd-486c-85ea-964738689708"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.096964 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b794f0a-8fb7-4253-8d82-40630895f983" (UID: "0b794f0a-8fb7-4253-8d82-40630895f983"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.120776 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74j5j"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.126200 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74j5j"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.135824 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b78e362-39e7-43a6-8a13-046c45623920" (UID: "5b78e362-39e7-43a6-8a13-046c45623920"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145236 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145508 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145583 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145639 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhkmc\" (UniqueName: \"kubernetes.io/projected/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-kube-api-access-qhkmc\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145694 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkvqd\" (UniqueName: \"kubernetes.io/projected/0b794f0a-8fb7-4253-8d82-40630895f983-kube-api-access-lkvqd\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145756 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145822 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9l2w\" (UniqueName: \"kubernetes.io/projected/2718cb8c-7abd-486c-85ea-964738689708-kube-api-access-h9l2w\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145882 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2718cb8c-7abd-486c-85ea-964738689708-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145935 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b78e362-39e7-43a6-8a13-046c45623920-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.145989 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgxtb\" (UniqueName: \"kubernetes.io/projected/5b78e362-39e7-43a6-8a13-046c45623920-kube-api-access-sgxtb\") on node \"crc\" 
DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.146042 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b794f0a-8fb7-4253-8d82-40630895f983-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.146096 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.802920 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" event={"ID":"b5fb7f2d-5841-44a3-a7cc-41b44c66cd73","Type":"ContainerStarted","Data":"a603ffe443ce3f60f2ce8262e5b4e53f25e1ead77a5c4e6c2434e7f0646e9dae"} Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.803289 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.808170 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s4nr" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.808189 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s4nr" event={"ID":"5b78e362-39e7-43a6-8a13-046c45623920","Type":"ContainerDied","Data":"60be9d023a6deb713d5cae93d6f44ebe52a9aea817d1545b3c5043010fc0fa35"} Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.809012 4782 scope.go:117] "RemoveContainer" containerID="7e4e318c6a40b2d21b3c8caf4922510ee0bc7557217d977d2b41968c5b7b6c62" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.810723 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.812776 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.816023 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbvvs" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.820360 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccqrw" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.821732 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7z6sc" event={"ID":"f48964dd-f8a3-4e2d-8df8-78bbe81a68eb","Type":"ContainerDied","Data":"566860156d3864d0a3bc693103e2792f04e0b95cce84713f5f3775aa1f4e9486"} Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.821778 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvvs" event={"ID":"2718cb8c-7abd-486c-85ea-964738689708","Type":"ContainerDied","Data":"5f4ab4fdc557c2996d8b56fa28ee9f30ee96de0de1f208916964589dbb7aad0f"} Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.821806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccqrw" event={"ID":"0b794f0a-8fb7-4253-8d82-40630895f983","Type":"ContainerDied","Data":"ae932cee91095f0ed5b29eaef3731c74650f1779fc6504b3b8315bfbabf8953e"} Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.824699 4782 generic.go:334] "Generic (PLEG): container finished" podID="30e616a4-be07-4f13-a359-dcfcdbdec622" containerID="aba55618d39af55f7ec67a8bbc7f661dba18d364d30be1cc148219607bb50980" exitCode=0 Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.824739 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" event={"ID":"30e616a4-be07-4f13-a359-dcfcdbdec622","Type":"ContainerDied","Data":"aba55618d39af55f7ec67a8bbc7f661dba18d364d30be1cc148219607bb50980"} Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.831442 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gzb6p" podStartSLOduration=8.831421301 podStartE2EDuration="8.831421301s" podCreationTimestamp="2025-11-24 12:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:00:10.827463615 +0000 UTC m=+260.071297384" watchObservedRunningTime="2025-11-24 12:00:10.831421301 +0000 UTC m=+260.075255090" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.842566 4782 scope.go:117] "RemoveContainer" containerID="dc3f0e9fe1d61a4724bc52f81494545deb745b23ae50906762468277aed237da" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.898301 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s4nr"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.904507 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8s4nr"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920154 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xspgw"] Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920442 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920465 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920479 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="extract-utilities" Nov 24 12:00:10 crc 
kubenswrapper[4782]: I1124 12:00:10.920489 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="extract-utilities" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920501 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920508 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920520 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="extract-utilities" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920528 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="extract-utilities" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920538 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920548 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920560 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920568 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920580 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="extract-utilities" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920588 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="extract-utilities" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920595 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="extract-utilities" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920602 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="extract-utilities" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920611 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920618 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920631 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920638 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920651 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b78e362-39e7-43a6-8a13-046c45623920" 
containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920657 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920669 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920676 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: E1124 12:00:10.920685 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.920693 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="extract-content" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.921546 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.921573 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2718cb8c-7abd-486c-85ea-964738689708" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.921585 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.921599 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b78e362-39e7-43a6-8a13-046c45623920" containerName="registry-server" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.921609 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" containerName="marketplace-operator" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.922470 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.927786 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.932882 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xspgw"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.933527 4782 scope.go:117] "RemoveContainer" containerID="e73184fcb9ce3831f23019d758a0040509bd9b5f7260d3aa817d360b3e4f7c4f" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.949474 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wbvvs"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.958854 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wbvvs"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.961648 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccqrw"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.961704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-utilities\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.961731 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-catalog-content\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.961777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6znzc\" (UniqueName: \"kubernetes.io/projected/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-kube-api-access-6znzc\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.963078 4782 scope.go:117] "RemoveContainer" containerID="6ed13594d1515997999a5b6952698a7aa44d4cd9b885a874aba4b23bd2123c9c" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.964097 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ccqrw"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.970085 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7z6sc"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.972842 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7z6sc"] Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.976684 4782 scope.go:117] "RemoveContainer" containerID="b7237052192ac0bd73c88b86680023ed388ee9c49c263e10d0ee05e4cb9dde13" Nov 24 12:00:10 crc kubenswrapper[4782]: I1124 12:00:10.988615 4782 scope.go:117] "RemoveContainer" containerID="d6f91a6fa2437e47f54877ea204401cd456233603d30046928c2494dd6196129" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.002178 4782 scope.go:117] 
"RemoveContainer" containerID="5aee2710beed1f7153ca4fa40d8c0864781917d0e5a6491f042debd968be0df4" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.015469 4782 scope.go:117] "RemoveContainer" containerID="9b6cc2b906f04261b03bcc91f5cdbc1ca29a5082e91945fe0a5dd2efeab11293" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.027978 4782 scope.go:117] "RemoveContainer" containerID="8e264324f4a8d3fb52238fff1bb09bc4dc273431533436864326283e7762fa6c" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.040822 4782 scope.go:117] "RemoveContainer" containerID="5ddc922c45a5761ef74b317a7d05b99e8d6504e8a438a8354dc944c64bd20eea" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.062557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6znzc\" (UniqueName: \"kubernetes.io/projected/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-kube-api-access-6znzc\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.062647 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-utilities\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.062679 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-catalog-content\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.063144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-catalog-content\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.063770 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-utilities\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.078555 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6znzc\" (UniqueName: \"kubernetes.io/projected/2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9-kube-api-access-6znzc\") pod \"redhat-marketplace-xspgw\" (UID: \"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9\") " pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.244844 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.499927 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b794f0a-8fb7-4253-8d82-40630895f983" path="/var/lib/kubelet/pods/0b794f0a-8fb7-4253-8d82-40630895f983/volumes" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.501002 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2718cb8c-7abd-486c-85ea-964738689708" path="/var/lib/kubelet/pods/2718cb8c-7abd-486c-85ea-964738689708/volumes" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.501837 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487eea64-5acd-4cb6-a57d-3904c3c86647" path="/var/lib/kubelet/pods/487eea64-5acd-4cb6-a57d-3904c3c86647/volumes" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.503255 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b78e362-39e7-43a6-8a13-046c45623920" path="/var/lib/kubelet/pods/5b78e362-39e7-43a6-8a13-046c45623920/volumes" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.504631 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48964dd-f8a3-4e2d-8df8-78bbe81a68eb" path="/var/lib/kubelet/pods/f48964dd-f8a3-4e2d-8df8-78bbe81a68eb/volumes" Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.697720 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xspgw"] Nov 24 12:00:11 crc kubenswrapper[4782]: I1124 12:00:11.831393 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xspgw" event={"ID":"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9","Type":"ContainerStarted","Data":"cedf1d789bf31321ee393ba2c83809e5b6ce59ef61c98133fb64a0e21495f840"} Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.042253 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.173428 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30e616a4-be07-4f13-a359-dcfcdbdec622-secret-volume\") pod \"30e616a4-be07-4f13-a359-dcfcdbdec622\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.173498 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqgvv\" (UniqueName: \"kubernetes.io/projected/30e616a4-be07-4f13-a359-dcfcdbdec622-kube-api-access-kqgvv\") pod \"30e616a4-be07-4f13-a359-dcfcdbdec622\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.173527 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30e616a4-be07-4f13-a359-dcfcdbdec622-config-volume\") pod \"30e616a4-be07-4f13-a359-dcfcdbdec622\" (UID: \"30e616a4-be07-4f13-a359-dcfcdbdec622\") " Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.175326 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e616a4-be07-4f13-a359-dcfcdbdec622-config-volume" (OuterVolumeSpecName: "config-volume") pod "30e616a4-be07-4f13-a359-dcfcdbdec622" (UID: "30e616a4-be07-4f13-a359-dcfcdbdec622"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.180715 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e616a4-be07-4f13-a359-dcfcdbdec622-kube-api-access-kqgvv" (OuterVolumeSpecName: "kube-api-access-kqgvv") pod "30e616a4-be07-4f13-a359-dcfcdbdec622" (UID: "30e616a4-be07-4f13-a359-dcfcdbdec622"). InnerVolumeSpecName "kube-api-access-kqgvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.181428 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e616a4-be07-4f13-a359-dcfcdbdec622-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "30e616a4-be07-4f13-a359-dcfcdbdec622" (UID: "30e616a4-be07-4f13-a359-dcfcdbdec622"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.280667 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6vswm"] Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.281071 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30e616a4-be07-4f13-a359-dcfcdbdec622-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.281148 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30e616a4-be07-4f13-a359-dcfcdbdec622-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.281211 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqgvv\" (UniqueName: \"kubernetes.io/projected/30e616a4-be07-4f13-a359-dcfcdbdec622-kube-api-access-kqgvv\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:12 crc kubenswrapper[4782]: E1124 12:00:12.281286 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e616a4-be07-4f13-a359-dcfcdbdec622" containerName="collect-profiles" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.281306 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e616a4-be07-4f13-a359-dcfcdbdec622" containerName="collect-profiles" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.282431 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e616a4-be07-4f13-a359-dcfcdbdec622" containerName="collect-profiles" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.295278 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vswm"] Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.295666 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.299226 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.483460 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-utilities\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.483571 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-catalog-content\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.483657 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7njg\" (UniqueName: \"kubernetes.io/projected/73cb42af-6271-49a9-8bc3-eb50ef39a50d-kube-api-access-g7njg\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.584925 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-utilities\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.585052 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-catalog-content\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.585214 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7njg\" (UniqueName: \"kubernetes.io/projected/73cb42af-6271-49a9-8bc3-eb50ef39a50d-kube-api-access-g7njg\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.585864 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-catalog-content\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.586208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-utilities\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.605015 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-g7njg\" (UniqueName: \"kubernetes.io/projected/73cb42af-6271-49a9-8bc3-eb50ef39a50d-kube-api-access-g7njg\") pod \"redhat-operators-6vswm\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.640290 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.842300 4782 generic.go:334] "Generic (PLEG): container finished" podID="2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9" containerID="f0a59abbdf0475596776b60218f4a5774935b2e40f9e1c4eb73f7f01318e4071" exitCode=0 Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.842549 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xspgw" event={"ID":"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9","Type":"ContainerDied","Data":"f0a59abbdf0475596776b60218f4a5774935b2e40f9e1c4eb73f7f01318e4071"} Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.846531 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.846953 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd" event={"ID":"30e616a4-be07-4f13-a359-dcfcdbdec622","Type":"ContainerDied","Data":"39e9175d652b73a83fa53269c4a718798153c99599c7c79829ae08104624dee1"} Nov 24 12:00:12 crc kubenswrapper[4782]: I1124 12:00:12.846974 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e9175d652b73a83fa53269c4a718798153c99599c7c79829ae08104624dee1" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.036660 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vswm"] Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.274917 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8kgk8"] Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.276182 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.280021 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.291645 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kgk8"] Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.308830 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklbl\" (UniqueName: \"kubernetes.io/projected/b28b07c9-871a-416e-8eb0-7ada07825bac-kube-api-access-dklbl\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.309274 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-catalog-content\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.309844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-utilities\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.412312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-utilities\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.412391 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklbl\" (UniqueName: \"kubernetes.io/projected/b28b07c9-871a-416e-8eb0-7ada07825bac-kube-api-access-dklbl\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.412416 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-catalog-content\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.412823 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-utilities\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.412848 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-catalog-content\") pod \"community-operators-8kgk8\" (UID: 
\"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.444407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklbl\" (UniqueName: \"kubernetes.io/projected/b28b07c9-871a-416e-8eb0-7ada07825bac-kube-api-access-dklbl\") pod \"community-operators-8kgk8\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.606716 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.853334 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vswm" event={"ID":"73cb42af-6271-49a9-8bc3-eb50ef39a50d","Type":"ContainerDied","Data":"5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a"} Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.853247 4782 generic.go:334] "Generic (PLEG): container finished" podID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerID="5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a" exitCode=0 Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.854669 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vswm" event={"ID":"73cb42af-6271-49a9-8bc3-eb50ef39a50d","Type":"ContainerStarted","Data":"ae4a71e9d0b2d589b204b0c4567f475b5b302819933bbdb86213eb3398d6cb45"} Nov 24 12:00:13 crc kubenswrapper[4782]: I1124 12:00:13.988979 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kgk8"] Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.677140 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dfdl"] Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.678594 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.680617 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.686767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dfdl"] Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.725931 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991dd9ae-cb8c-4f12-8568-fc7de0593214-catalog-content\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.725992 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991dd9ae-cb8c-4f12-8568-fc7de0593214-utilities\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.726065 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qtw\" (UniqueName: \"kubernetes.io/projected/991dd9ae-cb8c-4f12-8568-fc7de0593214-kube-api-access-k4qtw\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.827824 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991dd9ae-cb8c-4f12-8568-fc7de0593214-catalog-content\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.827869 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991dd9ae-cb8c-4f12-8568-fc7de0593214-utilities\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.827900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4qtw\" (UniqueName: \"kubernetes.io/projected/991dd9ae-cb8c-4f12-8568-fc7de0593214-kube-api-access-k4qtw\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.828365 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991dd9ae-cb8c-4f12-8568-fc7de0593214-utilities\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.828424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991dd9ae-cb8c-4f12-8568-fc7de0593214-catalog-content\") pod \"certified-operators-9dfdl\" (UID: 
\"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.849327 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4qtw\" (UniqueName: \"kubernetes.io/projected/991dd9ae-cb8c-4f12-8568-fc7de0593214-kube-api-access-k4qtw\") pod \"certified-operators-9dfdl\" (UID: \"991dd9ae-cb8c-4f12-8568-fc7de0593214\") " pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.860147 4782 generic.go:334] "Generic (PLEG): container finished" podID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerID="e07a0a03f5c187d6dee1dc13375c351e27bd7852841f912867c2d5947780422a" exitCode=0 Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.860192 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kgk8" event={"ID":"b28b07c9-871a-416e-8eb0-7ada07825bac","Type":"ContainerDied","Data":"e07a0a03f5c187d6dee1dc13375c351e27bd7852841f912867c2d5947780422a"} Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.860222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kgk8" event={"ID":"b28b07c9-871a-416e-8eb0-7ada07825bac","Type":"ContainerStarted","Data":"7973d0a92044ed006b40d7aae750f637f17d0c53b8316cb558389f6392461e5d"} Nov 24 12:00:14 crc kubenswrapper[4782]: I1124 12:00:14.994481 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:15 crc kubenswrapper[4782]: I1124 12:00:15.184250 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dfdl"] Nov 24 12:00:15 crc kubenswrapper[4782]: I1124 12:00:15.868956 4782 generic.go:334] "Generic (PLEG): container finished" podID="991dd9ae-cb8c-4f12-8568-fc7de0593214" containerID="4f115d2546a927ae6b4095b6671109a7041b93b75ca470206e08fa5d09403e45" exitCode=0 Nov 24 12:00:15 crc kubenswrapper[4782]: I1124 12:00:15.870802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dfdl" event={"ID":"991dd9ae-cb8c-4f12-8568-fc7de0593214","Type":"ContainerDied","Data":"4f115d2546a927ae6b4095b6671109a7041b93b75ca470206e08fa5d09403e45"} Nov 24 12:00:15 crc kubenswrapper[4782]: I1124 12:00:15.871282 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dfdl" event={"ID":"991dd9ae-cb8c-4f12-8568-fc7de0593214","Type":"ContainerStarted","Data":"d2404a9a1144b0681dd3216192e9cd0ea9f5d1e3a753e3f2bf944df3ae8eb2bd"} Nov 24 12:00:16 crc kubenswrapper[4782]: I1124 12:00:16.877510 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vswm" event={"ID":"73cb42af-6271-49a9-8bc3-eb50ef39a50d","Type":"ContainerStarted","Data":"2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832"} Nov 24 12:00:16 crc kubenswrapper[4782]: I1124 12:00:16.879785 4782 generic.go:334] "Generic (PLEG): container finished" podID="2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9" containerID="b9d1623ca4b03886ca9e7cf60ec15596a9cae99586d77474d533f0c636992c73" exitCode=0 Nov 24 12:00:16 crc kubenswrapper[4782]: I1124 12:00:16.879813 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xspgw" 
event={"ID":"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9","Type":"ContainerDied","Data":"b9d1623ca4b03886ca9e7cf60ec15596a9cae99586d77474d533f0c636992c73"} Nov 24 12:00:17 crc kubenswrapper[4782]: I1124 12:00:17.886973 4782 generic.go:334] "Generic (PLEG): container finished" podID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerID="2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832" exitCode=0 Nov 24 12:00:17 crc kubenswrapper[4782]: I1124 12:00:17.887076 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vswm" event={"ID":"73cb42af-6271-49a9-8bc3-eb50ef39a50d","Type":"ContainerDied","Data":"2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832"} Nov 24 12:00:17 crc kubenswrapper[4782]: I1124 12:00:17.895997 4782 generic.go:334] "Generic (PLEG): container finished" podID="991dd9ae-cb8c-4f12-8568-fc7de0593214" containerID="091ef159d3ee7ae5fac9d6cefe9e0ad589fbbbc14342d0da19a1bdeceb7644ec" exitCode=0 Nov 24 12:00:17 crc kubenswrapper[4782]: I1124 12:00:17.896084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dfdl" event={"ID":"991dd9ae-cb8c-4f12-8568-fc7de0593214","Type":"ContainerDied","Data":"091ef159d3ee7ae5fac9d6cefe9e0ad589fbbbc14342d0da19a1bdeceb7644ec"} Nov 24 12:00:17 crc kubenswrapper[4782]: I1124 12:00:17.898281 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xspgw" event={"ID":"2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9","Type":"ContainerStarted","Data":"3138eb260c431cb38047b2053e3adb0da7cb6becf0d2d475476f64f75d1be173"} Nov 24 12:00:17 crc kubenswrapper[4782]: I1124 12:00:17.899857 4782 generic.go:334] "Generic (PLEG): container finished" podID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerID="7e97f83a2715df3284705d6cc51b9d0ff852beaa079033a6f1b8a294e23c7fc4" exitCode=0 Nov 24 12:00:17 crc kubenswrapper[4782]: I1124 12:00:17.899892 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kgk8" event={"ID":"b28b07c9-871a-416e-8eb0-7ada07825bac","Type":"ContainerDied","Data":"7e97f83a2715df3284705d6cc51b9d0ff852beaa079033a6f1b8a294e23c7fc4"} Nov 24 12:00:18 crc kubenswrapper[4782]: I1124 12:00:18.012172 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xspgw" podStartSLOduration=3.292817727 podStartE2EDuration="8.01215526s" podCreationTimestamp="2025-11-24 12:00:10 +0000 UTC" firstStartedPulling="2025-11-24 12:00:12.845708395 +0000 UTC m=+262.089542164" lastFinishedPulling="2025-11-24 12:00:17.565045928 +0000 UTC m=+266.808879697" observedRunningTime="2025-11-24 12:00:18.009487548 +0000 UTC m=+267.253321317" watchObservedRunningTime="2025-11-24 12:00:18.01215526 +0000 UTC m=+267.255989029" Nov 24 12:00:18 crc kubenswrapper[4782]: I1124 12:00:18.917900 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vswm" event={"ID":"73cb42af-6271-49a9-8bc3-eb50ef39a50d","Type":"ContainerStarted","Data":"360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13"} Nov 24 12:00:18 crc kubenswrapper[4782]: I1124 12:00:18.925116 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dfdl" event={"ID":"991dd9ae-cb8c-4f12-8568-fc7de0593214","Type":"ContainerStarted","Data":"5b90ff8f5c8e54ebcf24fab9a4c3b1a0df6cba786144c9f566251c21693331c1"} Nov 24 12:00:18 crc kubenswrapper[4782]: I1124 12:00:18.939904 4782 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6vswm" podStartSLOduration=2.407031163 podStartE2EDuration="6.939885798s" podCreationTimestamp="2025-11-24 12:00:12 +0000 UTC" firstStartedPulling="2025-11-24 12:00:13.854779079 +0000 UTC m=+263.098612848" lastFinishedPulling="2025-11-24 12:00:18.387633724 +0000 UTC m=+267.631467483" observedRunningTime="2025-11-24 12:00:18.938587688 +0000 UTC m=+268.182421487" watchObservedRunningTime="2025-11-24 12:00:18.939885798 +0000 UTC m=+268.183719567" Nov 24 12:00:18 crc kubenswrapper[4782]: I1124 12:00:18.960983 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dfdl" podStartSLOduration=2.439167333 podStartE2EDuration="4.960961787s" podCreationTimestamp="2025-11-24 12:00:14 +0000 UTC" firstStartedPulling="2025-11-24 12:00:15.971190138 +0000 UTC m=+265.215023907" lastFinishedPulling="2025-11-24 12:00:18.492984592 +0000 UTC m=+267.736818361" observedRunningTime="2025-11-24 12:00:18.95909394 +0000 UTC m=+268.202927709" watchObservedRunningTime="2025-11-24 12:00:18.960961787 +0000 UTC m=+268.204795566" Nov 24 12:00:20 crc kubenswrapper[4782]: I1124 12:00:20.937187 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kgk8" event={"ID":"b28b07c9-871a-416e-8eb0-7ada07825bac","Type":"ContainerStarted","Data":"fc6b43878c5c68c368ce8f7b965c92c0b232bc6fe8a4169e4ecc342b65fd0136"} Nov 24 12:00:21 crc kubenswrapper[4782]: I1124 12:00:21.245606 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:21 crc kubenswrapper[4782]: I1124 12:00:21.245653 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:21 crc kubenswrapper[4782]: I1124 12:00:21.292496 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:21 crc kubenswrapper[4782]: I1124 12:00:21.309169 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8kgk8" podStartSLOduration=4.428351354 podStartE2EDuration="8.30915136s" podCreationTimestamp="2025-11-24 12:00:13 +0000 UTC" firstStartedPulling="2025-11-24 12:00:14.861549353 +0000 UTC m=+264.105383122" lastFinishedPulling="2025-11-24 12:00:18.742349359 +0000 UTC m=+267.986183128" observedRunningTime="2025-11-24 12:00:20.966224039 +0000 UTC m=+270.210057818" watchObservedRunningTime="2025-11-24 12:00:21.30915136 +0000 UTC m=+270.552985129" Nov 24 12:00:22 crc kubenswrapper[4782]: I1124 12:00:22.644449 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:22 crc kubenswrapper[4782]: I1124 12:00:22.644783 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:23 crc kubenswrapper[4782]: I1124 12:00:23.606861 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:23 crc kubenswrapper[4782]: I1124 12:00:23.607731 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:23 crc kubenswrapper[4782]: I1124 12:00:23.647002 4782 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:00:23 crc kubenswrapper[4782]: I1124 12:00:23.686683 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vswm" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="registry-server" probeResult="failure" output=< Nov 24 12:00:23 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:00:23 crc kubenswrapper[4782]: > Nov 24 12:00:24 crc kubenswrapper[4782]: I1124 12:00:24.995419 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:24 crc kubenswrapper[4782]: I1124 12:00:24.995752 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:25 crc kubenswrapper[4782]: I1124 12:00:25.031059 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:26 crc kubenswrapper[4782]: I1124 12:00:26.006715 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dfdl" Nov 24 12:00:31 crc kubenswrapper[4782]: I1124 12:00:31.288351 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xspgw" Nov 24 12:00:32 crc kubenswrapper[4782]: I1124 12:00:32.705346 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:32 crc kubenswrapper[4782]: I1124 12:00:32.751238 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 12:00:33 crc kubenswrapper[4782]: I1124 12:00:33.663404 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:01:30 crc kubenswrapper[4782]: I1124 12:01:30.410533 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:01:30 crc kubenswrapper[4782]: I1124 12:01:30.411436 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:02:00 crc kubenswrapper[4782]: I1124 12:02:00.410933 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:02:00 crc kubenswrapper[4782]: I1124 12:02:00.411599 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:02:30 crc 
kubenswrapper[4782]: I1124 12:02:30.410950 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:02:30 crc kubenswrapper[4782]: I1124 12:02:30.411629 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:02:30 crc kubenswrapper[4782]: I1124 12:02:30.411694 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:02:30 crc kubenswrapper[4782]: I1124 12:02:30.412501 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94138e351ef9369d7758b9a7396f3fb07dad56abcb5a94c945494ac177a2cb78"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:02:30 crc kubenswrapper[4782]: I1124 12:02:30.412582 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://94138e351ef9369d7758b9a7396f3fb07dad56abcb5a94c945494ac177a2cb78" gracePeriod=600 Nov 24 12:02:30 crc kubenswrapper[4782]: I1124 12:02:30.697789 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="94138e351ef9369d7758b9a7396f3fb07dad56abcb5a94c945494ac177a2cb78" exitCode=0 Nov 24 12:02:30 crc kubenswrapper[4782]: I1124 12:02:30.697903 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"94138e351ef9369d7758b9a7396f3fb07dad56abcb5a94c945494ac177a2cb78"} Nov 24 12:02:30 crc kubenswrapper[4782]: I1124 12:02:30.697973 4782 scope.go:117] "RemoveContainer" containerID="545f97e76e968225410d351c1761adb35193180d97e5a91237632e1ec832fc97" Nov 24 12:02:31 crc kubenswrapper[4782]: I1124 12:02:31.708446 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"fc5f1f7d75817a2e9e41c767286c65b537bdd7736d3a013c3d25b0320c190922"} Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.505997 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-flg2g"] Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.507417 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.518311 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-flg2g"] Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636592 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-bound-sa-token\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636641 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-registry-tls\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-trusted-ca\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636678 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvl8\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-kube-api-access-fhvl8\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636714 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-ca-trust-extracted\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636729 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-registry-certificates\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636753 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-installation-pull-secrets\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.636795 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.662451 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738200 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-ca-trust-extracted\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738247 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-registry-certificates\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738273 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-installation-pull-secrets\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738334 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-bound-sa-token\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738353 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-trusted-ca\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738386 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-registry-tls\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738405 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvl8\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-kube-api-access-fhvl8\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.738904 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-ca-trust-extracted\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.739647 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-trusted-ca\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.739694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-registry-certificates\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.744685 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-registry-tls\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.749129 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-installation-pull-secrets\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.757831 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-bound-sa-token\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.758032 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvl8\" (UniqueName: \"kubernetes.io/projected/7b6a1cdf-a887-4b0e-b40e-ecc540ca5044-kube-api-access-fhvl8\") pod \"image-registry-66df7c8f76-flg2g\" (UID: \"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044\") " pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:43 crc kubenswrapper[4782]: I1124 12:03:43.831672 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:44 crc kubenswrapper[4782]: I1124 12:03:44.014296 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-flg2g"] Nov 24 12:03:44 crc kubenswrapper[4782]: I1124 12:03:44.119821 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" event={"ID":"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044","Type":"ContainerStarted","Data":"3df21fd4acbfb88b14bdb0fe548695f3cefe9b82dc2d3ca4dde08d9f63498a51"} Nov 24 12:03:45 crc kubenswrapper[4782]: I1124 12:03:45.126657 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" event={"ID":"7b6a1cdf-a887-4b0e-b40e-ecc540ca5044","Type":"ContainerStarted","Data":"e45d006902796bec27c0c0e78566e307cbe79a5d996f9be36c966c416a885ec4"} Nov 24 12:03:45 crc kubenswrapper[4782]: I1124 12:03:45.126998 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:03:45 crc kubenswrapper[4782]: I1124 12:03:45.152623 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" podStartSLOduration=2.15259774 podStartE2EDuration="2.15259774s" podCreationTimestamp="2025-11-24 12:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:03:45.145443016 +0000 UTC m=+474.389276835" watchObservedRunningTime="2025-11-24 12:03:45.15259774 +0000 UTC m=+474.396431519" Nov 24 12:04:03 crc kubenswrapper[4782]: I1124 12:04:03.838585 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-flg2g" Nov 24 12:04:03 crc kubenswrapper[4782]: I1124 12:04:03.927182 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4cfz9"] Nov 24 12:04:28 crc kubenswrapper[4782]: I1124 12:04:28.970552 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" podUID="d1b47a4d-ace6-4560-89a5-b3e3ce247c74" containerName="registry" containerID="cri-o://40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4" gracePeriod=30 Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.331438 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.372685 4782 generic.go:334] "Generic (PLEG): container finished" podID="d1b47a4d-ace6-4560-89a5-b3e3ce247c74" containerID="40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4" exitCode=0 Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.372745 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" event={"ID":"d1b47a4d-ace6-4560-89a5-b3e3ce247c74","Type":"ContainerDied","Data":"40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4"} Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.372746 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.372773 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4cfz9" event={"ID":"d1b47a4d-ace6-4560-89a5-b3e3ce247c74","Type":"ContainerDied","Data":"b0c9015e2ac2449a65b07b13d6d00e526a30dbd3cb51be36ee30b0270f3c5aa0"} Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.372789 4782 scope.go:117] "RemoveContainer" containerID="40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.389925 4782 scope.go:117] "RemoveContainer" containerID="40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4" Nov 24 12:04:29 crc kubenswrapper[4782]: E1124 12:04:29.390462 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4\": container with ID starting with 40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4 not found: ID does not exist" containerID="40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.390529 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4"} err="failed to get container status \"40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4\": rpc error: code = NotFound desc = could not find container \"40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4\": container with ID starting with 40403a44df6bc27630eb6e6cfcb8af26ce0a41e7f1d79168d1c5912eb9c95bf4 not found: ID does not exist" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.502473 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-ca-trust-extracted\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.502924 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.503093 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-certificates\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.503258 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-tls\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.504276 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65n5b\" (UniqueName: 
\"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-kube-api-access-65n5b\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.504300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.504325 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-trusted-ca\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.504352 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-installation-pull-secrets\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.504407 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-bound-sa-token\") pod \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\" (UID: \"d1b47a4d-ace6-4560-89a5-b3e3ce247c74\") " Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.504963 4782 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.505925 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.511262 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-kube-api-access-65n5b" (OuterVolumeSpecName: "kube-api-access-65n5b") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "kube-api-access-65n5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.512712 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.513049 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.513156 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.521259 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.521545 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d1b47a4d-ace6-4560-89a5-b3e3ce247c74" (UID: "d1b47a4d-ace6-4560-89a5-b3e3ce247c74"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.605831 4782 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.606047 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.606056 4782 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.606066 4782 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.606075 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65n5b\" (UniqueName: \"kubernetes.io/projected/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-kube-api-access-65n5b\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.606084 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1b47a4d-ace6-4560-89a5-b3e3ce247c74-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.705983 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4cfz9"] Nov 24 12:04:29 crc kubenswrapper[4782]: I1124 12:04:29.709701 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4cfz9"] Nov 24 12:04:30 crc kubenswrapper[4782]: I1124 12:04:30.410485 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:04:30 crc kubenswrapper[4782]: I1124 12:04:30.410533 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:04:31 crc kubenswrapper[4782]: I1124 12:04:31.503288 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b47a4d-ace6-4560-89a5-b3e3ce247c74" path="/var/lib/kubelet/pods/d1b47a4d-ace6-4560-89a5-b3e3ce247c74/volumes" Nov 24 12:04:51 crc kubenswrapper[4782]: I1124 12:04:51.678028 4782 scope.go:117] "RemoveContainer" containerID="0a86d41d306738ce032464c78fcaa2be6319e8fed3b53518b1e5a89ee3037cd9" Nov 24 12:05:00 crc kubenswrapper[4782]: I1124 12:05:00.410468 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 24 12:05:00 crc kubenswrapper[4782]: I1124 12:05:00.411079 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.410784 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.411342 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.411397 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.411854 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc5f1f7d75817a2e9e41c767286c65b537bdd7736d3a013c3d25b0320c190922"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.411898 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://fc5f1f7d75817a2e9e41c767286c65b537bdd7736d3a013c3d25b0320c190922" gracePeriod=600 Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.731929 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="fc5f1f7d75817a2e9e41c767286c65b537bdd7736d3a013c3d25b0320c190922" exitCode=0 Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.731960 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"fc5f1f7d75817a2e9e41c767286c65b537bdd7736d3a013c3d25b0320c190922"} Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.732280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"0b0de42a9e31e2c0b3d63e2b240a6563edaaeabf4b832fd49516335de30ba2d0"} Nov 24 12:05:30 crc kubenswrapper[4782]: I1124 12:05:30.732300 4782 scope.go:117] "RemoveContainer" containerID="94138e351ef9369d7758b9a7396f3fb07dad56abcb5a94c945494ac177a2cb78" Nov 24 12:05:51 crc kubenswrapper[4782]: I1124 12:05:51.728787 4782 scope.go:117] "RemoveContainer" containerID="730cc0ba393d1fdc85aa23f8d58876318b77726717260c1c3fe11881a920b9f9" Nov 24 12:05:51 crc kubenswrapper[4782]: I1124 
Nov 24 12:05:51 crc kubenswrapper[4782]: I1124 12:05:51.749507 4782 scope.go:117] "RemoveContainer" containerID="7ff062d9c5460ae3ecdd51ef69cbc5c3401c28c7b799b5a85fb804398d46ccf8" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.127043 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tmfht"] Nov 24 12:05:57 crc kubenswrapper[4782]: E1124 12:05:57.127658 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b47a4d-ace6-4560-89a5-b3e3ce247c74" containerName="registry" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.127674 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b47a4d-ace6-4560-89a5-b3e3ce247c74" containerName="registry" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.127799 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b47a4d-ace6-4560-89a5-b3e3ce247c74" containerName="registry" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.128236 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.129704 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pclvr" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.130614 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.130622 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.137664 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vl9w\" (UniqueName: \"kubernetes.io/projected/3df46084-4a7d-46f9-9b83-0980a55f1752-kube-api-access-5vl9w\") pod \"cert-manager-cainjector-7f985d654d-tmfht\" (UID: \"3df46084-4a7d-46f9-9b83-0980a55f1752\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.147577 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tmfht"] Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.151281 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-4dr44"] Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.152104 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-4dr44" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.155167 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-b95cl" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.173234 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-4dr44"] Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.179395 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2k9fj"] Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.180234 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.182405 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zqmd2" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.197823 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2k9fj"] Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.239225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vl9w\" (UniqueName: \"kubernetes.io/projected/3df46084-4a7d-46f9-9b83-0980a55f1752-kube-api-access-5vl9w\") pod \"cert-manager-cainjector-7f985d654d-tmfht\" (UID: \"3df46084-4a7d-46f9-9b83-0980a55f1752\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.239278 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tt2\" (UniqueName: \"kubernetes.io/projected/5598822c-dc55-41dd-bb17-7657376575e7-kube-api-access-q6tt2\") pod \"cert-manager-5b446d88c5-4dr44\" (UID: \"5598822c-dc55-41dd-bb17-7657376575e7\") " pod="cert-manager/cert-manager-5b446d88c5-4dr44" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.239423 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrshb\" (UniqueName: \"kubernetes.io/projected/4070eb87-d044-4a58-8a71-1a9a53cc0ad2-kube-api-access-jrshb\") pod \"cert-manager-webhook-5655c58dd6-2k9fj\" (UID: \"4070eb87-d044-4a58-8a71-1a9a53cc0ad2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.258010 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vl9w\" (UniqueName: \"kubernetes.io/projected/3df46084-4a7d-46f9-9b83-0980a55f1752-kube-api-access-5vl9w\") pod \"cert-manager-cainjector-7f985d654d-tmfht\" (UID: \"3df46084-4a7d-46f9-9b83-0980a55f1752\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.340499 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrshb\" (UniqueName: \"kubernetes.io/projected/4070eb87-d044-4a58-8a71-1a9a53cc0ad2-kube-api-access-jrshb\") pod \"cert-manager-webhook-5655c58dd6-2k9fj\" (UID: \"4070eb87-d044-4a58-8a71-1a9a53cc0ad2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.340561 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6tt2\" (UniqueName: \"kubernetes.io/projected/5598822c-dc55-41dd-bb17-7657376575e7-kube-api-access-q6tt2\") pod \"cert-manager-5b446d88c5-4dr44\" (UID: \"5598822c-dc55-41dd-bb17-7657376575e7\") " pod="cert-manager/cert-manager-5b446d88c5-4dr44" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.356787 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrshb\" (UniqueName: \"kubernetes.io/projected/4070eb87-d044-4a58-8a71-1a9a53cc0ad2-kube-api-access-jrshb\") pod \"cert-manager-webhook-5655c58dd6-2k9fj\" (UID: \"4070eb87-d044-4a58-8a71-1a9a53cc0ad2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.366206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q6tt2\" (UniqueName: \"kubernetes.io/projected/5598822c-dc55-41dd-bb17-7657376575e7-kube-api-access-q6tt2\") pod \"cert-manager-5b446d88c5-4dr44\" (UID: \"5598822c-dc55-41dd-bb17-7657376575e7\") " pod="cert-manager/cert-manager-5b446d88c5-4dr44" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.447466 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.470932 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-4dr44" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.493783 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.671507 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tmfht"] Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.684989 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:05:57 crc kubenswrapper[4782]: W1124 12:05:57.740199 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5598822c_dc55_41dd_bb17_7657376575e7.slice/crio-4e8fefa321acb5529147a4b987f4a0e3d0e8bdd90efcad2c711b1bc6994ab850 WatchSource:0}: Error finding container 4e8fefa321acb5529147a4b987f4a0e3d0e8bdd90efcad2c711b1bc6994ab850: Status 404 returned error can't find the container with id 4e8fefa321acb5529147a4b987f4a0e3d0e8bdd90efcad2c711b1bc6994ab850 Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.742825 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-4dr44"] Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.774899 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2k9fj"] Nov 24 12:05:57 crc kubenswrapper[4782]: W1124 12:05:57.775577 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4070eb87_d044_4a58_8a71_1a9a53cc0ad2.slice/crio-1942a9649de881e3042aadcf2f8fbcb086af98b2de3fa7f27f3d75309d5ffef8 WatchSource:0}: Error finding container 1942a9649de881e3042aadcf2f8fbcb086af98b2de3fa7f27f3d75309d5ffef8: Status 404 returned error can't find the container with id 1942a9649de881e3042aadcf2f8fbcb086af98b2de3fa7f27f3d75309d5ffef8 Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.965485 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" event={"ID":"3df46084-4a7d-46f9-9b83-0980a55f1752","Type":"ContainerStarted","Data":"73a895f72ddbea84778fdcf50ae42c0ecfebca402f8c9dbd64895fe8d6a4ad17"} Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.966533 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" event={"ID":"4070eb87-d044-4a58-8a71-1a9a53cc0ad2","Type":"ContainerStarted","Data":"1942a9649de881e3042aadcf2f8fbcb086af98b2de3fa7f27f3d75309d5ffef8"} Nov 24 12:05:57 crc kubenswrapper[4782]: I1124 12:05:57.967418 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-4dr44" 
event={"ID":"5598822c-dc55-41dd-bb17-7657376575e7","Type":"ContainerStarted","Data":"4e8fefa321acb5529147a4b987f4a0e3d0e8bdd90efcad2c711b1bc6994ab850"} Nov 24 12:06:04 crc kubenswrapper[4782]: I1124 12:06:04.000996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" event={"ID":"3df46084-4a7d-46f9-9b83-0980a55f1752","Type":"ContainerStarted","Data":"13065bc3571fa8b6d13fcba5a8763d8cc48f49a1d74032913b71df77053f6143"} Nov 24 12:06:04 crc kubenswrapper[4782]: I1124 12:06:04.015120 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmfht" podStartSLOduration=1.555309511 podStartE2EDuration="7.01510206s" podCreationTimestamp="2025-11-24 12:05:57 +0000 UTC" firstStartedPulling="2025-11-24 12:05:57.684743188 +0000 UTC m=+606.928576957" lastFinishedPulling="2025-11-24 12:06:03.144535737 +0000 UTC m=+612.388369506" observedRunningTime="2025-11-24 12:06:04.013670982 +0000 UTC m=+613.257504751" watchObservedRunningTime="2025-11-24 12:06:04.01510206 +0000 UTC m=+613.258935829" Nov 24 12:06:05 crc kubenswrapper[4782]: I1124 12:06:05.015693 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" event={"ID":"4070eb87-d044-4a58-8a71-1a9a53cc0ad2","Type":"ContainerStarted","Data":"ec6f603789e360b9149fcbb1d73e6b5bf18c2d237a7f34a6d3de306602f397d1"} Nov 24 12:06:05 crc kubenswrapper[4782]: I1124 12:06:05.016921 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" Nov 24 12:06:05 crc kubenswrapper[4782]: I1124 12:06:05.027756 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-4dr44" event={"ID":"5598822c-dc55-41dd-bb17-7657376575e7","Type":"ContainerStarted","Data":"32fd8412bed952f83f494e95c60667928800a64188c730e61c7206a95f6bcb90"} Nov 24 12:06:05 crc kubenswrapper[4782]: I1124 12:06:05.039082 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" podStartSLOduration=1.8642464909999998 podStartE2EDuration="8.039067581s" podCreationTimestamp="2025-11-24 12:05:57 +0000 UTC" firstStartedPulling="2025-11-24 12:05:57.776820556 +0000 UTC m=+607.020654325" lastFinishedPulling="2025-11-24 12:06:03.951641646 +0000 UTC m=+613.195475415" observedRunningTime="2025-11-24 12:06:05.03789855 +0000 UTC m=+614.281732319" watchObservedRunningTime="2025-11-24 12:06:05.039067581 +0000 UTC m=+614.282901360" Nov 24 12:06:05 crc kubenswrapper[4782]: I1124 12:06:05.055884 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-4dr44" podStartSLOduration=1.836297335 podStartE2EDuration="8.055861869s" podCreationTimestamp="2025-11-24 12:05:57 +0000 UTC" firstStartedPulling="2025-11-24 12:05:57.743651131 +0000 UTC m=+606.987484900" lastFinishedPulling="2025-11-24 12:06:03.963215665 +0000 UTC m=+613.207049434" observedRunningTime="2025-11-24 12:06:05.055230752 +0000 UTC m=+614.299064541" watchObservedRunningTime="2025-11-24 12:06:05.055861869 +0000 UTC m=+614.299695668" Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.485209 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zzzxx"] Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.485950 4782 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="nbdb" containerID="cri-o://dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5" gracePeriod=30 Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.486062 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="sbdb" containerID="cri-o://e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8" gracePeriod=30 Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.486053 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-node" containerID="cri-o://15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652" gracePeriod=30 Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.486054 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-acl-logging" containerID="cri-o://5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b" gracePeriod=30 Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.486120 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="northd" containerID="cri-o://1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd" gracePeriod=30 Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.486184 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027" gracePeriod=30 Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.485917 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-controller" containerID="cri-o://8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a" gracePeriod=30 Nov 24 12:06:07 crc kubenswrapper[4782]: I1124 12:06:07.530898 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" containerID="cri-o://54b478eb651af6a52102f381e792521d9775fc8fad54899e20bb47909f65f992" gracePeriod=30 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.043690 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovnkube-controller/3.log" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.046189 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovn-acl-logging/0.log" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.046866 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovn-controller/0.log" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047281 
4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="54b478eb651af6a52102f381e792521d9775fc8fad54899e20bb47909f65f992" exitCode=0 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047305 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8" exitCode=0 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047314 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5" exitCode=0 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047324 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd" exitCode=0 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047336 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027" exitCode=0 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047349 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652" exitCode=0 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047361 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b" exitCode=143 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047393 4782 generic.go:334] "Generic (PLEG): container finished" podID="1de863b0-02f8-435c-9669-4ea856b352d8" containerID="8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a" exitCode=143 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047417 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"54b478eb651af6a52102f381e792521d9775fc8fad54899e20bb47909f65f992"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047460 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047477 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047490 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047505 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" 
event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047518 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047537 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047553 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.047506 4782 scope.go:117] "RemoveContainer" containerID="ae95f68f46b691271cc54fbd4b7d99451e74f2f19b63d5adedb70cb076909e3e" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.049342 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/2.log" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.049824 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/1.log" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.049856 4782 generic.go:334] "Generic (PLEG): container finished" podID="56de1ffb-9734-4992-b477-591dfae5ad41" containerID="d15e54d518e525fbb1abc68ed1bf4a5ba040d9a7c86aa3899fb0496edc578fcd" exitCode=2 Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.049877 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerDied","Data":"d15e54d518e525fbb1abc68ed1bf4a5ba040d9a7c86aa3899fb0496edc578fcd"} Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.050326 4782 scope.go:117] "RemoveContainer" containerID="d15e54d518e525fbb1abc68ed1bf4a5ba040d9a7c86aa3899fb0496edc578fcd" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.050555 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fp44f_openshift-multus(56de1ffb-9734-4992-b477-591dfae5ad41)\"" pod="openshift-multus/multus-fp44f" podUID="56de1ffb-9734-4992-b477-591dfae5ad41" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.103759 4782 scope.go:117] "RemoveContainer" containerID="43f096565b38208b2705da6ff8bb1254b7a3affbb870a7e447f3a49644fc30a8" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.229046 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovn-acl-logging/0.log" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.229529 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovn-controller/0.log" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.230004 
4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282139 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d9b5m"] Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282329 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="northd" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282340 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="northd" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282351 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282357 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282364 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282386 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282394 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282400 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282406 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282414 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282425 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-acl-logging" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282432 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-acl-logging" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282442 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282447 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282453 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-node" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282459 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-node" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 
12:06:08.282468 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kubecfg-setup" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282474 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kubecfg-setup" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282480 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="nbdb" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282485 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="nbdb" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282493 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="sbdb" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282498 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="sbdb" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282594 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="nbdb" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282603 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282609 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovn-acl-logging" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282617 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282626 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282632 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="sbdb" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282639 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282647 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282654 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="northd" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282662 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282670 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="kube-rbac-proxy-node" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282755 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc 
kubenswrapper[4782]: I1124 12:06:08.282762 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: E1124 12:06:08.282769 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282774 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.282870 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" containerName="ovnkube-controller" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.284704 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.386941 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-netd\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387001 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-systemd-units\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387023 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-slash\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387049 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-config\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387075 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-bin\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387108 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km4xp\" (UniqueName: \"kubernetes.io/projected/1de863b0-02f8-435c-9669-4ea856b352d8-kube-api-access-km4xp\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-netns\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387186 4782 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387223 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387235 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387262 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387289 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-slash" (OuterVolumeSpecName: "host-slash") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387320 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387532 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387642 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-kubelet\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387675 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-systemd\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387703 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-ovn-kubernetes\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387704 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387722 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-ovn\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387740 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-node-log\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387792 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-env-overrides\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387808 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-var-lib-openvswitch\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387809 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387809 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387832 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-etc-openvswitch\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387848 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-node-log" (OuterVolumeSpecName: "node-log") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387853 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387857 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1de863b0-02f8-435c-9669-4ea856b352d8-ovn-node-metrics-cert\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387876 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387902 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-log-socket\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387932 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-openvswitch\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.387972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-script-lib\") pod \"1de863b0-02f8-435c-9669-4ea856b352d8\" (UID: \"1de863b0-02f8-435c-9669-4ea856b352d8\") " Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-kubelet\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388092 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovn-node-metrics-cert\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388117 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-etc-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388127 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-systemd-units\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-log-socket\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388188 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-node-log\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388209 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmrfz\" (UniqueName: \"kubernetes.io/projected/85376c9f-1211-4f3b-b5f9-db8a371dc37c-kube-api-access-bmrfz\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388232 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388258 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-slash\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388282 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-cni-netd\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388304 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-ovn\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388326 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-env-overrides\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388347 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-systemd\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388366 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-cni-bin\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388413 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-run-ovn-kubernetes\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388421 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388437 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-run-netns\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388456 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-log-socket" (OuterVolumeSpecName: "log-socket") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388458 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-var-lib-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388486 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388491 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388511 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388521 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovnkube-script-lib\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388596 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovnkube-config\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388814 4782 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388835 4782 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388848 4782 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388859 4782 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388871 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388881 4782 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388891 4782 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388902 4782 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388912 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388922 4782 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388933 4782 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388945 4782 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388958 4782 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388970 4782 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388981 4782 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.388990 4782 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.389001 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1de863b0-02f8-435c-9669-4ea856b352d8-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.393263 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de863b0-02f8-435c-9669-4ea856b352d8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.393474 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de863b0-02f8-435c-9669-4ea856b352d8-kube-api-access-km4xp" (OuterVolumeSpecName: "kube-api-access-km4xp") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "kube-api-access-km4xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.400911 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1de863b0-02f8-435c-9669-4ea856b352d8" (UID: "1de863b0-02f8-435c-9669-4ea856b352d8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-kubelet\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489431 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovn-node-metrics-cert\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489450 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-etc-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-systemd-units\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489472 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-kubelet\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489519 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-log-socket\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489483 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-log-socket\") pod \"ovnkube-node-d9b5m\" (UID: 
\"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-node-log\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489561 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-systemd-units\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489573 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmrfz\" (UniqueName: \"kubernetes.io/projected/85376c9f-1211-4f3b-b5f9-db8a371dc37c-kube-api-access-bmrfz\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489595 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489598 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-node-log\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489614 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-slash\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489614 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-etc-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489651 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-cni-netd\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-slash\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc 
kubenswrapper[4782]: I1124 12:06:08.489633 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-cni-netd\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489708 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-ovn\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489728 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-env-overrides\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489744 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-systemd\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489745 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-ovn\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489762 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-cni-bin\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489780 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-run-ovn-kubernetes\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489801 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-run-netns\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489816 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-var-lib-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489831 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovnkube-script-lib\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovnkube-config\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489904 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km4xp\" (UniqueName: \"kubernetes.io/projected/1de863b0-02f8-435c-9669-4ea856b352d8-kube-api-access-km4xp\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489699 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489914 4782 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1de863b0-02f8-435c-9669-4ea856b352d8-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.489960 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1de863b0-02f8-435c-9669-4ea856b352d8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490003 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-run-netns\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490036 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-systemd\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490078 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-cni-bin\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490107 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-host-run-ovn-kubernetes\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-run-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85376c9f-1211-4f3b-b5f9-db8a371dc37c-var-lib-openvswitch\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-env-overrides\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovnkube-config\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.490874 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovnkube-script-lib\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.492768 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85376c9f-1211-4f3b-b5f9-db8a371dc37c-ovn-node-metrics-cert\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.517088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmrfz\" (UniqueName: \"kubernetes.io/projected/85376c9f-1211-4f3b-b5f9-db8a371dc37c-kube-api-access-bmrfz\") pod \"ovnkube-node-d9b5m\" (UID: \"85376c9f-1211-4f3b-b5f9-db8a371dc37c\") " pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: I1124 12:06:08.606340 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:08 crc kubenswrapper[4782]: W1124 12:06:08.623669 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85376c9f_1211_4f3b_b5f9_db8a371dc37c.slice/crio-76d4061dbf60c1edb44b90861fbdb4bc1ebf8f2fbbb72bf155e85e37d8d9fec0 WatchSource:0}: Error finding container 76d4061dbf60c1edb44b90861fbdb4bc1ebf8f2fbbb72bf155e85e37d8d9fec0: Status 404 returned error can't find the container with id 76d4061dbf60c1edb44b90861fbdb4bc1ebf8f2fbbb72bf155e85e37d8d9fec0 Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.059144 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovn-acl-logging/0.log" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.060021 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zzzxx_1de863b0-02f8-435c-9669-4ea856b352d8/ovn-controller/0.log" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.060842 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.061471 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzzxx" event={"ID":"1de863b0-02f8-435c-9669-4ea856b352d8","Type":"ContainerDied","Data":"05451d65c5f48ea004654d71a092f74208c9c1b465208c341ca64406bd5a6dc8"} Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.061510 4782 scope.go:117] "RemoveContainer" containerID="54b478eb651af6a52102f381e792521d9775fc8fad54899e20bb47909f65f992" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.063207 4782 generic.go:334] "Generic (PLEG): container finished" podID="85376c9f-1211-4f3b-b5f9-db8a371dc37c" containerID="0d85d87085160e1d54848cfe8d5b001a5b9cddfe410a67e7fa7b8bb5ae2584a7" exitCode=0 Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.063326 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerDied","Data":"0d85d87085160e1d54848cfe8d5b001a5b9cddfe410a67e7fa7b8bb5ae2584a7"} Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.063478 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"76d4061dbf60c1edb44b90861fbdb4bc1ebf8f2fbbb72bf155e85e37d8d9fec0"} Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.068962 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/2.log" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.099138 4782 scope.go:117] "RemoveContainer" containerID="e0741186d04a41c56338441c64284f6361730b6b1579e75764824ec6cd2727b8" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.128162 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zzzxx"] Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.128216 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zzzxx"] Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.132197 4782 scope.go:117] "RemoveContainer" 
containerID="dcc994f495166b9facaca4d11840038f20415dcb681a751c872c6f9ebd6f94e5" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.151715 4782 scope.go:117] "RemoveContainer" containerID="1407ffcd3314bbb21709e7ced75af8ca3ef76175eaa3bdb2752992aa9e74dabd" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.177977 4782 scope.go:117] "RemoveContainer" containerID="c973c49855f661d0c4d41d5fec5bc8d3943eaf1d5794633a1ddd424d86e24027" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.189944 4782 scope.go:117] "RemoveContainer" containerID="15338425082e9beecbcf1e375ac43217b99a1164b78ff185063882005884c652" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.202490 4782 scope.go:117] "RemoveContainer" containerID="5e90519c045cd316304e689af6888ebdd181afecb384b751460f424988b7282b" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.219410 4782 scope.go:117] "RemoveContainer" containerID="8a891c92e52d18d05d45dd27f2c4491b83e4b0f9447ced10e5045a78fd68085a" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.234539 4782 scope.go:117] "RemoveContainer" containerID="bee82b21e4194a7cbdca247e87912660b5db90c60de703c7c379f19ba630c29f" Nov 24 12:06:09 crc kubenswrapper[4782]: I1124 12:06:09.498836 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de863b0-02f8-435c-9669-4ea856b352d8" path="/var/lib/kubelet/pods/1de863b0-02f8-435c-9669-4ea856b352d8/volumes" Nov 24 12:06:10 crc kubenswrapper[4782]: I1124 12:06:10.081721 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"c8a971f10d1b9fb27176214b5eb9f6706eb035241a2e78f0b13db7a9ea7280e2"} Nov 24 12:06:10 crc kubenswrapper[4782]: I1124 12:06:10.082010 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"85bf30dcdf57843561c2ed9590ce0adbb74d7253f744b1fdccfbe05a1311f44a"} Nov 24 12:06:10 crc kubenswrapper[4782]: I1124 12:06:10.082164 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"5ac9a8729fa597d5207b65060db208ffac008086bbc69defadc5f03a926dcf6f"} Nov 24 12:06:10 crc kubenswrapper[4782]: I1124 12:06:10.082272 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"673b865c46fd7e89a20eea7d4914f1a8b0705c80cbab1bf551e02457f36c94e2"} Nov 24 12:06:10 crc kubenswrapper[4782]: I1124 12:06:10.082438 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"1928a62119645e3f546af3d440b94fd7f1303269fef4bedf571b1e0a95e69198"} Nov 24 12:06:10 crc kubenswrapper[4782]: I1124 12:06:10.082559 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"c50ca314c4238a82a03a33b52bd281f3c61f114411076be7f77f6ea85b6e125d"} Nov 24 12:06:12 crc kubenswrapper[4782]: I1124 12:06:12.097909 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" 
event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"13dc1b6eba1ff5d5d3d7f45ad2ec652d9b83dc09c7e5cbc8a73cb7e822535622"} Nov 24 12:06:12 crc kubenswrapper[4782]: I1124 12:06:12.497732 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-2k9fj" Nov 24 12:06:15 crc kubenswrapper[4782]: I1124 12:06:15.117297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" event={"ID":"85376c9f-1211-4f3b-b5f9-db8a371dc37c","Type":"ContainerStarted","Data":"6ecd2c8ae438f5d598e6119b0fb6b2af89ecd4a953d2c18efe2fa4e26a39d043"} Nov 24 12:06:15 crc kubenswrapper[4782]: I1124 12:06:15.117912 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:15 crc kubenswrapper[4782]: I1124 12:06:15.117952 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:15 crc kubenswrapper[4782]: I1124 12:06:15.155329 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" podStartSLOduration=7.155307547 podStartE2EDuration="7.155307547s" podCreationTimestamp="2025-11-24 12:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:06:15.141253302 +0000 UTC m=+624.385087081" watchObservedRunningTime="2025-11-24 12:06:15.155307547 +0000 UTC m=+624.399141326" Nov 24 12:06:15 crc kubenswrapper[4782]: I1124 12:06:15.159654 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:16 crc kubenswrapper[4782]: I1124 12:06:16.122698 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:16 crc kubenswrapper[4782]: I1124 12:06:16.147945 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:22 crc kubenswrapper[4782]: I1124 12:06:22.491096 4782 scope.go:117] "RemoveContainer" containerID="d15e54d518e525fbb1abc68ed1bf4a5ba040d9a7c86aa3899fb0496edc578fcd" Nov 24 12:06:22 crc kubenswrapper[4782]: E1124 12:06:22.491856 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fp44f_openshift-multus(56de1ffb-9734-4992-b477-591dfae5ad41)\"" pod="openshift-multus/multus-fp44f" podUID="56de1ffb-9734-4992-b477-591dfae5ad41" Nov 24 12:06:36 crc kubenswrapper[4782]: I1124 12:06:36.492874 4782 scope.go:117] "RemoveContainer" containerID="d15e54d518e525fbb1abc68ed1bf4a5ba040d9a7c86aa3899fb0496edc578fcd" Nov 24 12:06:37 crc kubenswrapper[4782]: I1124 12:06:37.301995 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fp44f_56de1ffb-9734-4992-b477-591dfae5ad41/kube-multus/2.log" Nov 24 12:06:37 crc kubenswrapper[4782]: I1124 12:06:37.302446 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fp44f" event={"ID":"56de1ffb-9734-4992-b477-591dfae5ad41","Type":"ContainerStarted","Data":"9bb5b29763fd98ae98bb5e591ffe3ed79951164c8b38e77776455dc0ceb27104"} Nov 24 12:06:38 crc kubenswrapper[4782]: I1124 12:06:38.643763 4782 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d9b5m" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.277553 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc"] Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.278994 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.282499 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.289702 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc"] Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.467112 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.467263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.467309 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bmf\" (UniqueName: \"kubernetes.io/projected/0550e456-35df-49b1-937c-5477c7e72543-kube-api-access-c9bmf\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.568024 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.568112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.568156 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bmf\" (UniqueName: \"kubernetes.io/projected/0550e456-35df-49b1-937c-5477c7e72543-kube-api-access-c9bmf\") pod 
\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.568951 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.569222 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.601784 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bmf\" (UniqueName: \"kubernetes.io/projected/0550e456-35df-49b1-937c-5477c7e72543-kube-api-access-c9bmf\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:06:59 crc kubenswrapper[4782]: I1124 12:06:59.602175 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:07:00 crc kubenswrapper[4782]: I1124 12:07:00.028347 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc"] Nov 24 12:07:00 crc kubenswrapper[4782]: W1124 12:07:00.036622 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0550e456_35df_49b1_937c_5477c7e72543.slice/crio-25511fc2a56fcdfd47a7029096baaf243e4ace6dbe45ddbddf69163d2e606c56 WatchSource:0}: Error finding container 25511fc2a56fcdfd47a7029096baaf243e4ace6dbe45ddbddf69163d2e606c56: Status 404 returned error can't find the container with id 25511fc2a56fcdfd47a7029096baaf243e4ace6dbe45ddbddf69163d2e606c56 Nov 24 12:07:00 crc kubenswrapper[4782]: I1124 12:07:00.701214 4782 generic.go:334] "Generic (PLEG): container finished" podID="0550e456-35df-49b1-937c-5477c7e72543" containerID="b38be20ed283848000a5930beb7697204277698265987ea20382a1746deb05ed" exitCode=0 Nov 24 12:07:00 crc kubenswrapper[4782]: I1124 12:07:00.701317 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" event={"ID":"0550e456-35df-49b1-937c-5477c7e72543","Type":"ContainerDied","Data":"b38be20ed283848000a5930beb7697204277698265987ea20382a1746deb05ed"} Nov 24 12:07:00 crc kubenswrapper[4782]: I1124 12:07:00.702522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" event={"ID":"0550e456-35df-49b1-937c-5477c7e72543","Type":"ContainerStarted","Data":"25511fc2a56fcdfd47a7029096baaf243e4ace6dbe45ddbddf69163d2e606c56"} Nov 24 12:07:02 crc 
kubenswrapper[4782]: I1124 12:07:02.711694 4782 generic.go:334] "Generic (PLEG): container finished" podID="0550e456-35df-49b1-937c-5477c7e72543" containerID="049edbd5dd60b7af06e745a9791fb811cd26a23cb7a68cbdea67c83665c3baea" exitCode=0 Nov 24 12:07:02 crc kubenswrapper[4782]: I1124 12:07:02.711924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" event={"ID":"0550e456-35df-49b1-937c-5477c7e72543","Type":"ContainerDied","Data":"049edbd5dd60b7af06e745a9791fb811cd26a23cb7a68cbdea67c83665c3baea"} Nov 24 12:07:03 crc kubenswrapper[4782]: I1124 12:07:03.725697 4782 generic.go:334] "Generic (PLEG): container finished" podID="0550e456-35df-49b1-937c-5477c7e72543" containerID="b75b42afcbd834cb6a47a904d6ac2c0e578973c975176e1512cf86a046272135" exitCode=0 Nov 24 12:07:03 crc kubenswrapper[4782]: I1124 12:07:03.725763 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" event={"ID":"0550e456-35df-49b1-937c-5477c7e72543","Type":"ContainerDied","Data":"b75b42afcbd834cb6a47a904d6ac2c0e578973c975176e1512cf86a046272135"} Nov 24 12:07:04 crc kubenswrapper[4782]: I1124 12:07:04.934713 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.036986 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-util\") pod \"0550e456-35df-49b1-937c-5477c7e72543\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.037117 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bmf\" (UniqueName: \"kubernetes.io/projected/0550e456-35df-49b1-937c-5477c7e72543-kube-api-access-c9bmf\") pod \"0550e456-35df-49b1-937c-5477c7e72543\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.037152 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-bundle\") pod \"0550e456-35df-49b1-937c-5477c7e72543\" (UID: \"0550e456-35df-49b1-937c-5477c7e72543\") " Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.043259 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-bundle" (OuterVolumeSpecName: "bundle") pod "0550e456-35df-49b1-937c-5477c7e72543" (UID: "0550e456-35df-49b1-937c-5477c7e72543"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.043635 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0550e456-35df-49b1-937c-5477c7e72543-kube-api-access-c9bmf" (OuterVolumeSpecName: "kube-api-access-c9bmf") pod "0550e456-35df-49b1-937c-5477c7e72543" (UID: "0550e456-35df-49b1-937c-5477c7e72543"). InnerVolumeSpecName "kube-api-access-c9bmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.051660 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-util" (OuterVolumeSpecName: "util") pod "0550e456-35df-49b1-937c-5477c7e72543" (UID: "0550e456-35df-49b1-937c-5477c7e72543"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.138154 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bmf\" (UniqueName: \"kubernetes.io/projected/0550e456-35df-49b1-937c-5477c7e72543-kube-api-access-c9bmf\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.138185 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.138196 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0550e456-35df-49b1-937c-5477c7e72543-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.739009 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" event={"ID":"0550e456-35df-49b1-937c-5477c7e72543","Type":"ContainerDied","Data":"25511fc2a56fcdfd47a7029096baaf243e4ace6dbe45ddbddf69163d2e606c56"} Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.739057 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25511fc2a56fcdfd47a7029096baaf243e4ace6dbe45ddbddf69163d2e606c56" Nov 24 12:07:05 crc kubenswrapper[4782]: I1124 12:07:05.739117 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.012138 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-b4h4d"] Nov 24 12:07:08 crc kubenswrapper[4782]: E1124 12:07:08.012405 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0550e456-35df-49b1-937c-5477c7e72543" containerName="extract" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.012421 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0550e456-35df-49b1-937c-5477c7e72543" containerName="extract" Nov 24 12:07:08 crc kubenswrapper[4782]: E1124 12:07:08.012434 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0550e456-35df-49b1-937c-5477c7e72543" containerName="util" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.012441 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0550e456-35df-49b1-937c-5477c7e72543" containerName="util" Nov 24 12:07:08 crc kubenswrapper[4782]: E1124 12:07:08.012451 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0550e456-35df-49b1-937c-5477c7e72543" containerName="pull" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.012461 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0550e456-35df-49b1-937c-5477c7e72543" containerName="pull" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.012585 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0550e456-35df-49b1-937c-5477c7e72543" containerName="extract" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.013045 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.014935 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-f974l" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.015490 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.016296 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.034263 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-b4h4d"] Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.175960 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpvz7\" (UniqueName: \"kubernetes.io/projected/e2997dd2-a58c-48d1-b003-5e90a0df8a2d-kube-api-access-bpvz7\") pod \"nmstate-operator-557fdffb88-b4h4d\" (UID: \"e2997dd2-a58c-48d1-b003-5e90a0df8a2d\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.277875 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpvz7\" (UniqueName: \"kubernetes.io/projected/e2997dd2-a58c-48d1-b003-5e90a0df8a2d-kube-api-access-bpvz7\") pod \"nmstate-operator-557fdffb88-b4h4d\" (UID: \"e2997dd2-a58c-48d1-b003-5e90a0df8a2d\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.293796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpvz7\" 
(UniqueName: \"kubernetes.io/projected/e2997dd2-a58c-48d1-b003-5e90a0df8a2d-kube-api-access-bpvz7\") pod \"nmstate-operator-557fdffb88-b4h4d\" (UID: \"e2997dd2-a58c-48d1-b003-5e90a0df8a2d\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.327739 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.744248 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-b4h4d"] Nov 24 12:07:08 crc kubenswrapper[4782]: I1124 12:07:08.756679 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" event={"ID":"e2997dd2-a58c-48d1-b003-5e90a0df8a2d","Type":"ContainerStarted","Data":"50a6a28830ca093157bfd16ba287d8723b446d8ba486900618309a27c83f84eb"} Nov 24 12:07:11 crc kubenswrapper[4782]: I1124 12:07:11.773929 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" event={"ID":"e2997dd2-a58c-48d1-b003-5e90a0df8a2d","Type":"ContainerStarted","Data":"34752dc94e09f7adce63b5cc521001cf5dc22635b26df305f222a7fe8f4a4dbd"} Nov 24 12:07:11 crc kubenswrapper[4782]: I1124 12:07:11.798083 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-b4h4d" podStartSLOduration=2.684290907 podStartE2EDuration="4.79806155s" podCreationTimestamp="2025-11-24 12:07:07 +0000 UTC" firstStartedPulling="2025-11-24 12:07:08.749494851 +0000 UTC m=+677.993328620" lastFinishedPulling="2025-11-24 12:07:10.863265494 +0000 UTC m=+680.107099263" observedRunningTime="2025-11-24 12:07:11.793556041 +0000 UTC m=+681.037389910" watchObservedRunningTime="2025-11-24 12:07:11.79806155 +0000 UTC m=+681.041895359" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.802603 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9"] Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.803729 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.808506 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-kn66w" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.817393 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c"] Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.818465 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.820177 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9"] Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.820712 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.833484 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c"] Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.845760 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2fms8"] Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.846589 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.936248 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-nmstate-lock\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.936330 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-ovs-socket\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.936356 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25m4\" (UniqueName: \"kubernetes.io/projected/ec671193-a1fa-4295-8ac6-6f2df89a3687-kube-api-access-h25m4\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.936411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddq8t\" (UniqueName: \"kubernetes.io/projected/93ae4c19-bf24-48ea-96db-36a5bdd72d01-kube-api-access-ddq8t\") pod \"nmstate-webhook-6b89b748d8-swb2c\" (UID: \"93ae4c19-bf24-48ea-96db-36a5bdd72d01\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.936469 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jsv\" (UniqueName: \"kubernetes.io/projected/7b438b30-337c-4f13-8973-2a170ccb7a2a-kube-api-access-98jsv\") pod \"nmstate-metrics-5dcf9c57c5-zmbn9\" (UID: \"7b438b30-337c-4f13-8973-2a170ccb7a2a\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.936531 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-dbus-socket\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.936572 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/93ae4c19-bf24-48ea-96db-36a5bdd72d01-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-swb2c\" (UID: \"93ae4c19-bf24-48ea-96db-36a5bdd72d01\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.975075 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk"] Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.975875 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.980722 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.980791 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cx7rz" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.980753 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 12:07:12 crc kubenswrapper[4782]: I1124 12:07:12.997068 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk"] Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038174 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jsv\" (UniqueName: \"kubernetes.io/projected/7b438b30-337c-4f13-8973-2a170ccb7a2a-kube-api-access-98jsv\") pod \"nmstate-metrics-5dcf9c57c5-zmbn9\" (UID: \"7b438b30-337c-4f13-8973-2a170ccb7a2a\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038224 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-dbus-socket\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038247 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e70ec59-8a74-4f10-bddd-f30177d331f4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038269 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/93ae4c19-bf24-48ea-96db-36a5bdd72d01-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-swb2c\" (UID: \"93ae4c19-bf24-48ea-96db-36a5bdd72d01\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038316 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-nmstate-lock\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038342 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8e70ec59-8a74-4f10-bddd-f30177d331f4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038357 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6ks\" (UniqueName: \"kubernetes.io/projected/8e70ec59-8a74-4f10-bddd-f30177d331f4-kube-api-access-xf6ks\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038404 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-ovs-socket\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038420 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25m4\" (UniqueName: \"kubernetes.io/projected/ec671193-a1fa-4295-8ac6-6f2df89a3687-kube-api-access-h25m4\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038444 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddq8t\" (UniqueName: \"kubernetes.io/projected/93ae4c19-bf24-48ea-96db-36a5bdd72d01-kube-api-access-ddq8t\") pod \"nmstate-webhook-6b89b748d8-swb2c\" (UID: \"93ae4c19-bf24-48ea-96db-36a5bdd72d01\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038455 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-nmstate-lock\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038543 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-ovs-socket\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.038632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec671193-a1fa-4295-8ac6-6f2df89a3687-dbus-socket\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.047406 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/93ae4c19-bf24-48ea-96db-36a5bdd72d01-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-swb2c\" (UID: \"93ae4c19-bf24-48ea-96db-36a5bdd72d01\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.060234 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25m4\" (UniqueName: \"kubernetes.io/projected/ec671193-a1fa-4295-8ac6-6f2df89a3687-kube-api-access-h25m4\") pod \"nmstate-handler-2fms8\" (UID: \"ec671193-a1fa-4295-8ac6-6f2df89a3687\") " pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.062708 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddq8t\" (UniqueName: \"kubernetes.io/projected/93ae4c19-bf24-48ea-96db-36a5bdd72d01-kube-api-access-ddq8t\") pod \"nmstate-webhook-6b89b748d8-swb2c\" (UID: \"93ae4c19-bf24-48ea-96db-36a5bdd72d01\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.063970 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jsv\" (UniqueName: \"kubernetes.io/projected/7b438b30-337c-4f13-8973-2a170ccb7a2a-kube-api-access-98jsv\") pod \"nmstate-metrics-5dcf9c57c5-zmbn9\" (UID: \"7b438b30-337c-4f13-8973-2a170ccb7a2a\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.120238 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.136813 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.139088 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e70ec59-8a74-4f10-bddd-f30177d331f4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.139177 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8e70ec59-8a74-4f10-bddd-f30177d331f4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.139199 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf6ks\" (UniqueName: \"kubernetes.io/projected/8e70ec59-8a74-4f10-bddd-f30177d331f4-kube-api-access-xf6ks\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.143978 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8e70ec59-8a74-4f10-bddd-f30177d331f4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.154047 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e70ec59-8a74-4f10-bddd-f30177d331f4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: 
\"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.162204 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.180573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf6ks\" (UniqueName: \"kubernetes.io/projected/8e70ec59-8a74-4f10-bddd-f30177d331f4-kube-api-access-xf6ks\") pod \"nmstate-console-plugin-5874bd7bc5-w8qrk\" (UID: \"8e70ec59-8a74-4f10-bddd-f30177d331f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.228177 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5976bc9cd4-zkd99"] Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.233706 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.239514 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5976bc9cd4-zkd99"] Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.298674 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.344860 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-service-ca\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.345168 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-serving-cert\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.345407 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vzz\" (UniqueName: \"kubernetes.io/projected/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-kube-api-access-z4vzz\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.345436 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-oauth-serving-cert\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.345472 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-config\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 
12:07:13.345511 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-trusted-ca-bundle\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.345576 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-oauth-config\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.446998 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-service-ca\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.447350 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-serving-cert\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.447416 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4vzz\" (UniqueName: \"kubernetes.io/projected/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-kube-api-access-z4vzz\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.447448 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-oauth-serving-cert\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.447490 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-config\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.447537 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-trusted-ca-bundle\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.447605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-oauth-config\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 
12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.452984 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-service-ca\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.454641 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-oauth-serving-cert\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.455041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-trusted-ca-bundle\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.455120 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-config\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.455826 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-oauth-config\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.457075 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-console-serving-cert\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.467806 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9"] Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.498666 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vzz\" (UniqueName: \"kubernetes.io/projected/8c4e4be2-97fd-47b8-b7aa-a83845ba0192-kube-api-access-z4vzz\") pod \"console-5976bc9cd4-zkd99\" (UID: \"8c4e4be2-97fd-47b8-b7aa-a83845ba0192\") " pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.535715 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c"] Nov 24 12:07:13 crc kubenswrapper[4782]: W1124 12:07:13.548572 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93ae4c19_bf24_48ea_96db_36a5bdd72d01.slice/crio-cedf5afb46c32431be3e31d3bebb3af348bbd083315144c48cfebdf5a74f8139 WatchSource:0}: Error finding container cedf5afb46c32431be3e31d3bebb3af348bbd083315144c48cfebdf5a74f8139: Status 404 returned error can't find the container with id 
cedf5afb46c32431be3e31d3bebb3af348bbd083315144c48cfebdf5a74f8139 Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.569105 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.618529 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk"] Nov 24 12:07:13 crc kubenswrapper[4782]: W1124 12:07:13.626811 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e70ec59_8a74_4f10_bddd_f30177d331f4.slice/crio-76494a801381b32ba91f6e37dd1981ee2bc500c0ff8220053902acef1f9a3327 WatchSource:0}: Error finding container 76494a801381b32ba91f6e37dd1981ee2bc500c0ff8220053902acef1f9a3327: Status 404 returned error can't find the container with id 76494a801381b32ba91f6e37dd1981ee2bc500c0ff8220053902acef1f9a3327 Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.759980 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5976bc9cd4-zkd99"] Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.784768 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" event={"ID":"8e70ec59-8a74-4f10-bddd-f30177d331f4","Type":"ContainerStarted","Data":"76494a801381b32ba91f6e37dd1981ee2bc500c0ff8220053902acef1f9a3327"} Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.785896 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" event={"ID":"7b438b30-337c-4f13-8973-2a170ccb7a2a","Type":"ContainerStarted","Data":"c79b9ff4d7aac4e58367abd7574fbef74ebe6baba24a79629495f3b8c6c03cce"} Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.786977 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2fms8" event={"ID":"ec671193-a1fa-4295-8ac6-6f2df89a3687","Type":"ContainerStarted","Data":"d4d23bd926ca507deddccebde396d3a8cb83f56132431db460ad7870bcd16951"} Nov 24 12:07:13 crc kubenswrapper[4782]: I1124 12:07:13.788410 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" event={"ID":"93ae4c19-bf24-48ea-96db-36a5bdd72d01","Type":"ContainerStarted","Data":"cedf5afb46c32431be3e31d3bebb3af348bbd083315144c48cfebdf5a74f8139"} Nov 24 12:07:13 crc kubenswrapper[4782]: W1124 12:07:13.828873 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c4e4be2_97fd_47b8_b7aa_a83845ba0192.slice/crio-be0bce2a59609317ad7ef646d2cea51ed8fd1ccdc99f67e7b48cc1313a70aa7e WatchSource:0}: Error finding container be0bce2a59609317ad7ef646d2cea51ed8fd1ccdc99f67e7b48cc1313a70aa7e: Status 404 returned error can't find the container with id be0bce2a59609317ad7ef646d2cea51ed8fd1ccdc99f67e7b48cc1313a70aa7e Nov 24 12:07:14 crc kubenswrapper[4782]: I1124 12:07:14.800312 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5976bc9cd4-zkd99" event={"ID":"8c4e4be2-97fd-47b8-b7aa-a83845ba0192","Type":"ContainerStarted","Data":"cf6aa2ec5274e720158842d22c83ddd6f43a8adebc48316b9f79e34f21c52897"} Nov 24 12:07:14 crc kubenswrapper[4782]: I1124 12:07:14.800878 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5976bc9cd4-zkd99" 
event={"ID":"8c4e4be2-97fd-47b8-b7aa-a83845ba0192","Type":"ContainerStarted","Data":"be0bce2a59609317ad7ef646d2cea51ed8fd1ccdc99f67e7b48cc1313a70aa7e"} Nov 24 12:07:14 crc kubenswrapper[4782]: I1124 12:07:14.819720 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5976bc9cd4-zkd99" podStartSLOduration=1.819702376 podStartE2EDuration="1.819702376s" podCreationTimestamp="2025-11-24 12:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:07:14.814335334 +0000 UTC m=+684.058169103" watchObservedRunningTime="2025-11-24 12:07:14.819702376 +0000 UTC m=+684.063536155" Nov 24 12:07:16 crc kubenswrapper[4782]: I1124 12:07:16.813808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2fms8" event={"ID":"ec671193-a1fa-4295-8ac6-6f2df89a3687","Type":"ContainerStarted","Data":"3cd2a0e3180f7f615a118431cd689cacba367ce111d5986afcf2cbc089f34cd3"} Nov 24 12:07:16 crc kubenswrapper[4782]: I1124 12:07:16.815610 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" event={"ID":"93ae4c19-bf24-48ea-96db-36a5bdd72d01","Type":"ContainerStarted","Data":"988b12a8f00f6fc5826cb32b06051d443b84b2f87fe03b8124f2125bd3e87456"} Nov 24 12:07:16 crc kubenswrapper[4782]: I1124 12:07:16.815768 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:16 crc kubenswrapper[4782]: I1124 12:07:16.815788 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:16 crc kubenswrapper[4782]: I1124 12:07:16.818555 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" event={"ID":"7b438b30-337c-4f13-8973-2a170ccb7a2a","Type":"ContainerStarted","Data":"6794aa72e57e748eef1c6ebf4d0fcb8c877545f0d585748e89e64af0a58f03ac"} Nov 24 12:07:16 crc kubenswrapper[4782]: I1124 12:07:16.831750 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2fms8" podStartSLOduration=2.127714838 podStartE2EDuration="4.831732688s" podCreationTimestamp="2025-11-24 12:07:12 +0000 UTC" firstStartedPulling="2025-11-24 12:07:13.288696822 +0000 UTC m=+682.532530591" lastFinishedPulling="2025-11-24 12:07:15.992714672 +0000 UTC m=+685.236548441" observedRunningTime="2025-11-24 12:07:16.827799624 +0000 UTC m=+686.071633423" watchObservedRunningTime="2025-11-24 12:07:16.831732688 +0000 UTC m=+686.075566457" Nov 24 12:07:16 crc kubenswrapper[4782]: I1124 12:07:16.846274 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" podStartSLOduration=2.404162713 podStartE2EDuration="4.846257774s" podCreationTimestamp="2025-11-24 12:07:12 +0000 UTC" firstStartedPulling="2025-11-24 12:07:13.550461916 +0000 UTC m=+682.794295685" lastFinishedPulling="2025-11-24 12:07:15.992556977 +0000 UTC m=+685.236390746" observedRunningTime="2025-11-24 12:07:16.845232417 +0000 UTC m=+686.089066216" watchObservedRunningTime="2025-11-24 12:07:16.846257774 +0000 UTC m=+686.090091563" Nov 24 12:07:17 crc kubenswrapper[4782]: I1124 12:07:17.825823 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" 
event={"ID":"8e70ec59-8a74-4f10-bddd-f30177d331f4","Type":"ContainerStarted","Data":"013f032d9161d80e98fcac21a4657e0a71110b589996203382e45ee826fd3dbc"} Nov 24 12:07:17 crc kubenswrapper[4782]: I1124 12:07:17.841804 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-w8qrk" podStartSLOduration=2.372519533 podStartE2EDuration="5.841787482s" podCreationTimestamp="2025-11-24 12:07:12 +0000 UTC" firstStartedPulling="2025-11-24 12:07:13.630847359 +0000 UTC m=+682.874681128" lastFinishedPulling="2025-11-24 12:07:17.100115308 +0000 UTC m=+686.343949077" observedRunningTime="2025-11-24 12:07:17.840413646 +0000 UTC m=+687.084247415" watchObservedRunningTime="2025-11-24 12:07:17.841787482 +0000 UTC m=+687.085621261" Nov 24 12:07:18 crc kubenswrapper[4782]: I1124 12:07:18.833872 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" event={"ID":"7b438b30-337c-4f13-8973-2a170ccb7a2a","Type":"ContainerStarted","Data":"e92363eec05df90f9bf08cf167bc703a80782308ac972b0bf15eb78e8d54d141"} Nov 24 12:07:18 crc kubenswrapper[4782]: I1124 12:07:18.852863 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zmbn9" podStartSLOduration=1.829904918 podStartE2EDuration="6.852844402s" podCreationTimestamp="2025-11-24 12:07:12 +0000 UTC" firstStartedPulling="2025-11-24 12:07:13.483666974 +0000 UTC m=+682.727500743" lastFinishedPulling="2025-11-24 12:07:18.506606468 +0000 UTC m=+687.750440227" observedRunningTime="2025-11-24 12:07:18.84939317 +0000 UTC m=+688.093226959" watchObservedRunningTime="2025-11-24 12:07:18.852844402 +0000 UTC m=+688.096678181" Nov 24 12:07:23 crc kubenswrapper[4782]: I1124 12:07:23.191894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2fms8" Nov 24 12:07:23 crc kubenswrapper[4782]: I1124 12:07:23.570249 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:23 crc kubenswrapper[4782]: I1124 12:07:23.570304 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:23 crc kubenswrapper[4782]: I1124 12:07:23.575051 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:23 crc kubenswrapper[4782]: I1124 12:07:23.865267 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5976bc9cd4-zkd99" Nov 24 12:07:23 crc kubenswrapper[4782]: I1124 12:07:23.942100 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qs4j5"] Nov 24 12:07:30 crc kubenswrapper[4782]: I1124 12:07:30.410928 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:07:30 crc kubenswrapper[4782]: I1124 12:07:30.411538 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 24 12:07:33 crc kubenswrapper[4782]: I1124 12:07:33.146705 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-swb2c" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.120228 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw"] Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.122572 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.127138 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.146788 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw"] Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.184018 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.184102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjkg\" (UniqueName: \"kubernetes.io/projected/41bc6902-66ca-49f1-8796-2170eb3e1e00-kube-api-access-xdjkg\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.184326 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.285768 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.285840 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.285888 4782 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xdjkg\" (UniqueName: \"kubernetes.io/projected/41bc6902-66ca-49f1-8796-2170eb3e1e00-kube-api-access-xdjkg\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.286537 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.286598 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.306053 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjkg\" (UniqueName: \"kubernetes.io/projected/41bc6902-66ca-49f1-8796-2170eb3e1e00-kube-api-access-xdjkg\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.457022 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.691639 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw"] Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.994967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" event={"ID":"41bc6902-66ca-49f1-8796-2170eb3e1e00","Type":"ContainerStarted","Data":"bd7342403bd39b626977fb8abf19ad7d693d8f7b5f78453b5b59e23a10e03bfa"} Nov 24 12:07:46 crc kubenswrapper[4782]: I1124 12:07:46.995322 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" event={"ID":"41bc6902-66ca-49f1-8796-2170eb3e1e00","Type":"ContainerStarted","Data":"21d1d8e334b3b516f74fa384df3fe870b94c00cc0452bd7dc3a782766ed31e79"} Nov 24 12:07:48 crc kubenswrapper[4782]: I1124 12:07:48.003811 4782 generic.go:334] "Generic (PLEG): container finished" podID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerID="bd7342403bd39b626977fb8abf19ad7d693d8f7b5f78453b5b59e23a10e03bfa" exitCode=0 Nov 24 12:07:48 crc kubenswrapper[4782]: I1124 12:07:48.004061 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" event={"ID":"41bc6902-66ca-49f1-8796-2170eb3e1e00","Type":"ContainerDied","Data":"bd7342403bd39b626977fb8abf19ad7d693d8f7b5f78453b5b59e23a10e03bfa"} Nov 24 12:07:48 crc kubenswrapper[4782]: I1124 12:07:48.986240 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qs4j5" podUID="a162cdd4-6657-40da-92f9-5f428fe8dd96" containerName="console" containerID="cri-o://cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b" gracePeriod=15 Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.623880 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qs4j5_a162cdd4-6657-40da-92f9-5f428fe8dd96/console/0.log" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.624190 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.734970 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle\") pod \"a162cdd4-6657-40da-92f9-5f428fe8dd96\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.735045 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert\") pod \"a162cdd4-6657-40da-92f9-5f428fe8dd96\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.735091 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca\") pod \"a162cdd4-6657-40da-92f9-5f428fe8dd96\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.735136 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndj55\" (UniqueName: \"kubernetes.io/projected/a162cdd4-6657-40da-92f9-5f428fe8dd96-kube-api-access-ndj55\") pod \"a162cdd4-6657-40da-92f9-5f428fe8dd96\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.735158 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config\") pod \"a162cdd4-6657-40da-92f9-5f428fe8dd96\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.735205 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config\") pod \"a162cdd4-6657-40da-92f9-5f428fe8dd96\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.735225 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert\") pod \"a162cdd4-6657-40da-92f9-5f428fe8dd96\" (UID: \"a162cdd4-6657-40da-92f9-5f428fe8dd96\") " Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.735868 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca" (OuterVolumeSpecName: "service-ca") pod "a162cdd4-6657-40da-92f9-5f428fe8dd96" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.736433 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.736425 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config" (OuterVolumeSpecName: "console-config") pod "a162cdd4-6657-40da-92f9-5f428fe8dd96" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.736642 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a162cdd4-6657-40da-92f9-5f428fe8dd96" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.736819 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a162cdd4-6657-40da-92f9-5f428fe8dd96" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.740884 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a162cdd4-6657-40da-92f9-5f428fe8dd96" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.741112 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a162cdd4-6657-40da-92f9-5f428fe8dd96-kube-api-access-ndj55" (OuterVolumeSpecName: "kube-api-access-ndj55") pod "a162cdd4-6657-40da-92f9-5f428fe8dd96" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96"). InnerVolumeSpecName "kube-api-access-ndj55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.745263 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a162cdd4-6657-40da-92f9-5f428fe8dd96" (UID: "a162cdd4-6657-40da-92f9-5f428fe8dd96"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.837308 4782 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.837338 4782 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.837348 4782 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.837357 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a162cdd4-6657-40da-92f9-5f428fe8dd96-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.837365 4782 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a162cdd4-6657-40da-92f9-5f428fe8dd96-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:49 crc kubenswrapper[4782]: I1124 12:07:49.837385 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndj55\" (UniqueName: \"kubernetes.io/projected/a162cdd4-6657-40da-92f9-5f428fe8dd96-kube-api-access-ndj55\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.015710 4782 generic.go:334] "Generic (PLEG): container finished" podID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerID="f6c5890bb5e18db5c1383ed352f4d7cec4bfd1fc79e67db56c4b6b06731630b8" exitCode=0 Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.015777 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" event={"ID":"41bc6902-66ca-49f1-8796-2170eb3e1e00","Type":"ContainerDied","Data":"f6c5890bb5e18db5c1383ed352f4d7cec4bfd1fc79e67db56c4b6b06731630b8"} Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.018763 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qs4j5_a162cdd4-6657-40da-92f9-5f428fe8dd96/console/0.log" Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.018799 4782 generic.go:334] "Generic (PLEG): container finished" podID="a162cdd4-6657-40da-92f9-5f428fe8dd96" containerID="cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b" exitCode=2 Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.018817 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qs4j5" event={"ID":"a162cdd4-6657-40da-92f9-5f428fe8dd96","Type":"ContainerDied","Data":"cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b"} Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.018833 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qs4j5" event={"ID":"a162cdd4-6657-40da-92f9-5f428fe8dd96","Type":"ContainerDied","Data":"f2f8308c67a164faa0943c519912117e1d5b08bb15c8409a5617a8f938de46b3"} Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.018848 4782 scope.go:117] "RemoveContainer" 
containerID="cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b" Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.018849 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qs4j5" Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.047294 4782 scope.go:117] "RemoveContainer" containerID="cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b" Nov 24 12:07:50 crc kubenswrapper[4782]: E1124 12:07:50.048637 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b\": container with ID starting with cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b not found: ID does not exist" containerID="cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b" Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.048677 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b"} err="failed to get container status \"cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b\": rpc error: code = NotFound desc = could not find container \"cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b\": container with ID starting with cd6590c0b2c26e4b519505bafc079d45ee2fbeabae555dac7df83d2907b82d3b not found: ID does not exist" Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.062804 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qs4j5"] Nov 24 12:07:50 crc kubenswrapper[4782]: I1124 12:07:50.065305 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qs4j5"] Nov 24 12:07:51 crc kubenswrapper[4782]: I1124 12:07:51.030366 4782 generic.go:334] "Generic (PLEG): container finished" podID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerID="bf92e1c78e3ba8dbba48e90791b5ad71681f8c5c62ce73a49f775faf262e4deb" exitCode=0 Nov 24 12:07:51 crc kubenswrapper[4782]: I1124 12:07:51.030671 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" event={"ID":"41bc6902-66ca-49f1-8796-2170eb3e1e00","Type":"ContainerDied","Data":"bf92e1c78e3ba8dbba48e90791b5ad71681f8c5c62ce73a49f775faf262e4deb"} Nov 24 12:07:51 crc kubenswrapper[4782]: I1124 12:07:51.501518 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a162cdd4-6657-40da-92f9-5f428fe8dd96" path="/var/lib/kubelet/pods/a162cdd4-6657-40da-92f9-5f428fe8dd96/volumes" Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.260643 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.375287 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjkg\" (UniqueName: \"kubernetes.io/projected/41bc6902-66ca-49f1-8796-2170eb3e1e00-kube-api-access-xdjkg\") pod \"41bc6902-66ca-49f1-8796-2170eb3e1e00\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.375625 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-util\") pod \"41bc6902-66ca-49f1-8796-2170eb3e1e00\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.375789 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-bundle\") pod \"41bc6902-66ca-49f1-8796-2170eb3e1e00\" (UID: \"41bc6902-66ca-49f1-8796-2170eb3e1e00\") " Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.376887 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-bundle" (OuterVolumeSpecName: "bundle") pod "41bc6902-66ca-49f1-8796-2170eb3e1e00" (UID: "41bc6902-66ca-49f1-8796-2170eb3e1e00"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.383150 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bc6902-66ca-49f1-8796-2170eb3e1e00-kube-api-access-xdjkg" (OuterVolumeSpecName: "kube-api-access-xdjkg") pod "41bc6902-66ca-49f1-8796-2170eb3e1e00" (UID: "41bc6902-66ca-49f1-8796-2170eb3e1e00"). InnerVolumeSpecName "kube-api-access-xdjkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.393297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-util" (OuterVolumeSpecName: "util") pod "41bc6902-66ca-49f1-8796-2170eb3e1e00" (UID: "41bc6902-66ca-49f1-8796-2170eb3e1e00"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.478497 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.478812 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41bc6902-66ca-49f1-8796-2170eb3e1e00-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:52 crc kubenswrapper[4782]: I1124 12:07:52.479019 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjkg\" (UniqueName: \"kubernetes.io/projected/41bc6902-66ca-49f1-8796-2170eb3e1e00-kube-api-access-xdjkg\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:53 crc kubenswrapper[4782]: I1124 12:07:53.044360 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" event={"ID":"41bc6902-66ca-49f1-8796-2170eb3e1e00","Type":"ContainerDied","Data":"21d1d8e334b3b516f74fa384df3fe870b94c00cc0452bd7dc3a782766ed31e79"} Nov 24 12:07:53 crc kubenswrapper[4782]: I1124 12:07:53.044452 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d1d8e334b3b516f74fa384df3fe870b94c00cc0452bd7dc3a782766ed31e79" Nov 24 12:07:53 crc kubenswrapper[4782]: I1124 12:07:53.044520 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw" Nov 24 12:08:00 crc kubenswrapper[4782]: I1124 12:08:00.411120 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:08:00 crc kubenswrapper[4782]: I1124 12:08:00.412721 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.107346 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"] Nov 24 12:08:01 crc kubenswrapper[4782]: E1124 12:08:01.107915 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerName="util" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.107935 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerName="util" Nov 24 12:08:01 crc kubenswrapper[4782]: E1124 12:08:01.107948 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerName="extract" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.107955 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerName="extract" Nov 24 12:08:01 crc kubenswrapper[4782]: E1124 12:08:01.107970 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a162cdd4-6657-40da-92f9-5f428fe8dd96" containerName="console" Nov 24 
Nov 24 12:08:01 crc kubenswrapper[4782]: E1124 12:08:01.108000 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerName="pull"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.108008 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerName="pull"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.108126 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="41bc6902-66ca-49f1-8796-2170eb3e1e00" containerName="extract"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.108138 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a162cdd4-6657-40da-92f9-5f428fe8dd96" containerName="console"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.108586 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.110514 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.110623 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.110838 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.110886 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-t4pw6"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.114441 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.123195 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"]
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.192771 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e57d9da0-c929-401e-9311-7c2caa53e702-apiservice-cert\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.192836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e57d9da0-c929-401e-9311-7c2caa53e702-webhook-cert\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.192873 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6hw\" (UniqueName: \"kubernetes.io/projected/e57d9da0-c929-401e-9311-7c2caa53e702-kube-api-access-8j6hw\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.294170 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6hw\" (UniqueName: \"kubernetes.io/projected/e57d9da0-c929-401e-9311-7c2caa53e702-kube-api-access-8j6hw\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.294249 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e57d9da0-c929-401e-9311-7c2caa53e702-apiservice-cert\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.294290 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e57d9da0-c929-401e-9311-7c2caa53e702-webhook-cert\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.300343 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e57d9da0-c929-401e-9311-7c2caa53e702-webhook-cert\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.303803 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e57d9da0-c929-401e-9311-7c2caa53e702-apiservice-cert\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.327998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6hw\" (UniqueName: \"kubernetes.io/projected/e57d9da0-c929-401e-9311-7c2caa53e702-kube-api-access-8j6hw\") pod \"metallb-operator-controller-manager-5cf8447d56-ls4f7\" (UID: \"e57d9da0-c929-401e-9311-7c2caa53e702\") " pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.424320 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.441051 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc"]
Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.450626 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc"
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.452995 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.454048 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.454448 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rzd6z" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.469736 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc"] Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.497183 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-webhook-cert\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.497247 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-apiservice-cert\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.497342 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lln6l\" (UniqueName: \"kubernetes.io/projected/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-kube-api-access-lln6l\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.598177 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-apiservice-cert\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.598269 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lln6l\" (UniqueName: \"kubernetes.io/projected/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-kube-api-access-lln6l\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.598307 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-webhook-cert\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 
12:08:01.626198 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-webhook-cert\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.632088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-apiservice-cert\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.639507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lln6l\" (UniqueName: \"kubernetes.io/projected/a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661-kube-api-access-lln6l\") pod \"metallb-operator-webhook-server-8688c9769b-zmxnc\" (UID: \"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661\") " pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.758735 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7"] Nov 24 12:08:01 crc kubenswrapper[4782]: I1124 12:08:01.818164 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:02 crc kubenswrapper[4782]: I1124 12:08:02.093973 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7" event={"ID":"e57d9da0-c929-401e-9311-7c2caa53e702","Type":"ContainerStarted","Data":"8e06de5899a410902ba86cd9a6436bb14765c9ee35455a12582b11ca42bc22d5"} Nov 24 12:08:02 crc kubenswrapper[4782]: I1124 12:08:02.195893 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc"] Nov 24 12:08:02 crc kubenswrapper[4782]: W1124 12:08:02.210447 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1bc3416_5d2f_48bf_b8b9_2aa77cd3e661.slice/crio-7af8a844636a599f7e47c69e0e968bb7e9f28d94a1d2b4c86bcf614aeafa46c1 WatchSource:0}: Error finding container 7af8a844636a599f7e47c69e0e968bb7e9f28d94a1d2b4c86bcf614aeafa46c1: Status 404 returned error can't find the container with id 7af8a844636a599f7e47c69e0e968bb7e9f28d94a1d2b4c86bcf614aeafa46c1 Nov 24 12:08:03 crc kubenswrapper[4782]: I1124 12:08:03.103281 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" event={"ID":"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661","Type":"ContainerStarted","Data":"7af8a844636a599f7e47c69e0e968bb7e9f28d94a1d2b4c86bcf614aeafa46c1"} Nov 24 12:08:09 crc kubenswrapper[4782]: I1124 12:08:09.133835 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7" event={"ID":"e57d9da0-c929-401e-9311-7c2caa53e702","Type":"ContainerStarted","Data":"ae713c5cdeb52690a2dab646e291d1426c9858249faa6a566b7e8dc3bbea1aa0"} Nov 24 12:08:09 crc kubenswrapper[4782]: I1124 12:08:09.134476 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7" Nov 24 12:08:09 crc kubenswrapper[4782]: I1124 12:08:09.135222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" event={"ID":"a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661","Type":"ContainerStarted","Data":"bb3286e638d5ac19ccec5a06555d8f2075b9c4a475755fbee77ac2d4cf767934"} Nov 24 12:08:09 crc kubenswrapper[4782]: I1124 12:08:09.135355 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:09 crc kubenswrapper[4782]: I1124 12:08:09.156530 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7" podStartSLOduration=1.917170446 podStartE2EDuration="8.156512077s" podCreationTimestamp="2025-11-24 12:08:01 +0000 UTC" firstStartedPulling="2025-11-24 12:08:01.764680757 +0000 UTC m=+731.008514526" lastFinishedPulling="2025-11-24 12:08:08.004022368 +0000 UTC m=+737.247856157" observedRunningTime="2025-11-24 12:08:09.152111586 +0000 UTC m=+738.395945355" watchObservedRunningTime="2025-11-24 12:08:09.156512077 +0000 UTC m=+738.400345856" Nov 24 12:08:09 crc kubenswrapper[4782]: I1124 12:08:09.174734 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" podStartSLOduration=2.365146631 podStartE2EDuration="8.174715688s" podCreationTimestamp="2025-11-24 12:08:01 +0000 UTC" firstStartedPulling="2025-11-24 12:08:02.212319702 +0000 UTC m=+731.456153471" lastFinishedPulling="2025-11-24 12:08:08.021888769 +0000 UTC m=+737.265722528" observedRunningTime="2025-11-24 12:08:09.172478646 +0000 UTC m=+738.416312425" watchObservedRunningTime="2025-11-24 12:08:09.174715688 +0000 UTC m=+738.418549457" Nov 24 12:08:18 crc kubenswrapper[4782]: I1124 12:08:18.664414 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8xv9n"] Nov 24 12:08:18 crc kubenswrapper[4782]: I1124 12:08:18.665144 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" podUID="e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" containerName="controller-manager" containerID="cri-o://01c505962d6166058bc5b0d1ca1bd29912764d701c294bda95ea855a78d3320c" gracePeriod=30 Nov 24 12:08:18 crc kubenswrapper[4782]: I1124 12:08:18.688867 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"] Nov 24 12:08:18 crc kubenswrapper[4782]: I1124 12:08:18.689095 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" podUID="6632ed0d-58eb-4873-b45b-e2750ac2267b" containerName="route-controller-manager" containerID="cri-o://729309e1d37fa7f5f16b47eefcb83cfd6d0b65a4773c57a059dc0fdd3c5f14ac" gracePeriod=30 Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.190763 4782 generic.go:334] "Generic (PLEG): container finished" podID="e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" containerID="01c505962d6166058bc5b0d1ca1bd29912764d701c294bda95ea855a78d3320c" exitCode=0 Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.190990 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" 
event={"ID":"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3","Type":"ContainerDied","Data":"01c505962d6166058bc5b0d1ca1bd29912764d701c294bda95ea855a78d3320c"} Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.192051 4782 generic.go:334] "Generic (PLEG): container finished" podID="6632ed0d-58eb-4873-b45b-e2750ac2267b" containerID="729309e1d37fa7f5f16b47eefcb83cfd6d0b65a4773c57a059dc0fdd3c5f14ac" exitCode=0 Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.192070 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" event={"ID":"6632ed0d-58eb-4873-b45b-e2750ac2267b","Type":"ContainerDied","Data":"729309e1d37fa7f5f16b47eefcb83cfd6d0b65a4773c57a059dc0fdd3c5f14ac"} Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.192083 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" event={"ID":"6632ed0d-58eb-4873-b45b-e2750ac2267b","Type":"ContainerDied","Data":"fadc668d2c0b8c355c3050b8a54bcae890b047e0345fc5a8a6c57a96cb896015"} Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.192092 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fadc668d2c0b8c355c3050b8a54bcae890b047e0345fc5a8a6c57a96cb896015" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.214282 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.279869 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.336996 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-client-ca\") pod \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.337066 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vhtv\" (UniqueName: \"kubernetes.io/projected/6632ed0d-58eb-4873-b45b-e2750ac2267b-kube-api-access-7vhtv\") pod \"6632ed0d-58eb-4873-b45b-e2750ac2267b\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.337101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-serving-cert\") pod \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.337128 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6632ed0d-58eb-4873-b45b-e2750ac2267b-serving-cert\") pod \"6632ed0d-58eb-4873-b45b-e2750ac2267b\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.337160 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-client-ca\") pod \"6632ed0d-58eb-4873-b45b-e2750ac2267b\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " Nov 24 12:08:19 crc 
kubenswrapper[4782]: I1124 12:08:19.337213 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-proxy-ca-bundles\") pod \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.337238 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6cxj\" (UniqueName: \"kubernetes.io/projected/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-kube-api-access-n6cxj\") pod \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.337271 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-config\") pod \"6632ed0d-58eb-4873-b45b-e2750ac2267b\" (UID: \"6632ed0d-58eb-4873-b45b-e2750ac2267b\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.337297 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-config\") pod \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\" (UID: \"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3\") " Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.338309 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-config" (OuterVolumeSpecName: "config") pod "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" (UID: "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.338816 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-client-ca" (OuterVolumeSpecName: "client-ca") pod "6632ed0d-58eb-4873-b45b-e2750ac2267b" (UID: "6632ed0d-58eb-4873-b45b-e2750ac2267b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.339027 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" (UID: "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.339154 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" (UID: "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.339982 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-config" (OuterVolumeSpecName: "config") pod "6632ed0d-58eb-4873-b45b-e2750ac2267b" (UID: "6632ed0d-58eb-4873-b45b-e2750ac2267b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.344168 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6632ed0d-58eb-4873-b45b-e2750ac2267b-kube-api-access-7vhtv" (OuterVolumeSpecName: "kube-api-access-7vhtv") pod "6632ed0d-58eb-4873-b45b-e2750ac2267b" (UID: "6632ed0d-58eb-4873-b45b-e2750ac2267b"). InnerVolumeSpecName "kube-api-access-7vhtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.345161 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" (UID: "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.347101 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6632ed0d-58eb-4873-b45b-e2750ac2267b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6632ed0d-58eb-4873-b45b-e2750ac2267b" (UID: "6632ed0d-58eb-4873-b45b-e2750ac2267b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.350752 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-kube-api-access-n6cxj" (OuterVolumeSpecName: "kube-api-access-n6cxj") pod "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" (UID: "e3136b2c-b84b-4f2d-8be9-e34ed6f722c3"). InnerVolumeSpecName "kube-api-access-n6cxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438549 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vhtv\" (UniqueName: \"kubernetes.io/projected/6632ed0d-58eb-4873-b45b-e2750ac2267b-kube-api-access-7vhtv\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438590 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438602 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6632ed0d-58eb-4873-b45b-e2750ac2267b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438611 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438624 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438635 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6cxj\" (UniqueName: \"kubernetes.io/projected/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-kube-api-access-n6cxj\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438645 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6632ed0d-58eb-4873-b45b-e2750ac2267b-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438669 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.438678 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.833286 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l"] Nov 24 12:08:19 crc kubenswrapper[4782]: E1124 12:08:19.833541 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" containerName="controller-manager" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.833557 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" containerName="controller-manager" Nov 24 12:08:19 crc kubenswrapper[4782]: E1124 12:08:19.833570 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6632ed0d-58eb-4873-b45b-e2750ac2267b" containerName="route-controller-manager" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.833578 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6632ed0d-58eb-4873-b45b-e2750ac2267b" containerName="route-controller-manager" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.833724 4782 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" containerName="controller-manager" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.833738 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6632ed0d-58eb-4873-b45b-e2750ac2267b" containerName="route-controller-manager" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.834156 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.847465 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l"] Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.957977 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e74678-4991-4d4e-83aa-5004a7eb9b8e-serving-cert\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.958025 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-client-ca\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.958131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-config\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:19 crc kubenswrapper[4782]: I1124 12:08:19.958149 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbb2q\" (UniqueName: \"kubernetes.io/projected/89e74678-4991-4d4e-83aa-5004a7eb9b8e-kube-api-access-vbb2q\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.059130 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e74678-4991-4d4e-83aa-5004a7eb9b8e-serving-cert\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.059178 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-client-ca\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.059278 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-config\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.059299 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbb2q\" (UniqueName: \"kubernetes.io/projected/89e74678-4991-4d4e-83aa-5004a7eb9b8e-kube-api-access-vbb2q\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.060498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-client-ca\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.062018 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-config\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.071756 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e74678-4991-4d4e-83aa-5004a7eb9b8e-serving-cert\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.083882 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbb2q\" (UniqueName: \"kubernetes.io/projected/89e74678-4991-4d4e-83aa-5004a7eb9b8e-kube-api-access-vbb2q\") pod \"route-controller-manager-7b5b96b6dd-wps2l\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.148231 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.207347 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62" Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.207969 4782 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.208360 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8xv9n" event={"ID":"e3136b2c-b84b-4f2d-8be9-e34ed6f722c3","Type":"ContainerDied","Data":"653139f4b7d5b0024e4799b20187c6da54dfc5ddadd95a8e67583a7fec95610a"}
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.208418 4782 scope.go:117] "RemoveContainer" containerID="01c505962d6166058bc5b0d1ca1bd29912764d701c294bda95ea855a78d3320c"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.245714 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8xv9n"]
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.256223 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8xv9n"]
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.263480 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"]
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.268074 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8p62"]
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.575064 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l"]
Nov 24 12:08:20 crc kubenswrapper[4782]: W1124 12:08:20.582123 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89e74678_4991_4d4e_83aa_5004a7eb9b8e.slice/crio-2e04c9a32aa37fd762149bf3fd214b270091cc4e23f30f771ce725ad5d4cacfc WatchSource:0}: Error finding container 2e04c9a32aa37fd762149bf3fd214b270091cc4e23f30f771ce725ad5d4cacfc: Status 404 returned error can't find the container with id 2e04c9a32aa37fd762149bf3fd214b270091cc4e23f30f771ce725ad5d4cacfc
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.735949 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l"]
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.836136 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cbf559779-bwsxl"]
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.838410 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.842536 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.842582 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.842650 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.842742 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.843090 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.843254 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.852859 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cbf559779-bwsxl"]
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.858462 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.980355 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d357300e-e000-4f7b-9580-e6594c53dc90-serving-cert\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.980486 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-proxy-ca-bundles\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.980519 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf7mq\" (UniqueName: \"kubernetes.io/projected/d357300e-e000-4f7b-9580-e6594c53dc90-kube-api-access-kf7mq\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.980540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-config\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:20 crc kubenswrapper[4782]: I1124 12:08:20.980586 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-client-ca\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.081454 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d357300e-e000-4f7b-9580-e6594c53dc90-serving-cert\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.081747 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-proxy-ca-bundles\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.083159 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7mq\" (UniqueName: \"kubernetes.io/projected/d357300e-e000-4f7b-9580-e6594c53dc90-kube-api-access-kf7mq\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.083667 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-config\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.083097 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-proxy-ca-bundles\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.084899 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-config\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.085158 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-client-ca\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.085865 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d357300e-e000-4f7b-9580-e6594c53dc90-client-ca\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl"
Nov 24 12:08:21
crc kubenswrapper[4782]: I1124 12:08:21.087462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d357300e-e000-4f7b-9580-e6594c53dc90-serving-cert\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.104282 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7mq\" (UniqueName: \"kubernetes.io/projected/d357300e-e000-4f7b-9580-e6594c53dc90-kube-api-access-kf7mq\") pod \"controller-manager-cbf559779-bwsxl\" (UID: \"d357300e-e000-4f7b-9580-e6594c53dc90\") " pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.190136 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.223004 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" event={"ID":"89e74678-4991-4d4e-83aa-5004a7eb9b8e","Type":"ContainerStarted","Data":"81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e"} Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.223043 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" event={"ID":"89e74678-4991-4d4e-83aa-5004a7eb9b8e","Type":"ContainerStarted","Data":"2e04c9a32aa37fd762149bf3fd214b270091cc4e23f30f771ce725ad5d4cacfc"} Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.223143 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" podUID="89e74678-4991-4d4e-83aa-5004a7eb9b8e" containerName="route-controller-manager" containerID="cri-o://81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e" gracePeriod=30 Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.223714 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.244723 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" podStartSLOduration=3.244708793 podStartE2EDuration="3.244708793s" podCreationTimestamp="2025-11-24 12:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:08:21.244441276 +0000 UTC m=+750.488275055" watchObservedRunningTime="2025-11-24 12:08:21.244708793 +0000 UTC m=+750.488542562" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.488021 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cbf559779-bwsxl"] Nov 24 12:08:21 crc kubenswrapper[4782]: W1124 12:08:21.497264 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd357300e_e000_4f7b_9580_e6594c53dc90.slice/crio-a2afdc699d756a59f39b3719bac2200f9f5db979524bf746b5392f525578e5ca WatchSource:0}: Error finding container 
a2afdc699d756a59f39b3719bac2200f9f5db979524bf746b5392f525578e5ca: Status 404 returned error can't find the container with id a2afdc699d756a59f39b3719bac2200f9f5db979524bf746b5392f525578e5ca Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.502181 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6632ed0d-58eb-4873-b45b-e2750ac2267b" path="/var/lib/kubelet/pods/6632ed0d-58eb-4873-b45b-e2750ac2267b/volumes" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.503010 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3136b2c-b84b-4f2d-8be9-e34ed6f722c3" path="/var/lib/kubelet/pods/e3136b2c-b84b-4f2d-8be9-e34ed6f722c3/volumes" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.573415 4782 patch_prober.go:28] interesting pod/route-controller-manager-7b5b96b6dd-wps2l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": read tcp 10.217.0.2:50008->10.217.0.49:8443: read: connection reset by peer" start-of-body= Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.573461 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" podUID="89e74678-4991-4d4e-83aa-5004a7eb9b8e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": read tcp 10.217.0.2:50008->10.217.0.49:8443: read: connection reset by peer" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.829646 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8688c9769b-zmxnc" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.903768 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7b5b96b6dd-wps2l_89e74678-4991-4d4e-83aa-5004a7eb9b8e/route-controller-manager/0.log" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.903828 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.942794 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc"] Nov 24 12:08:21 crc kubenswrapper[4782]: E1124 12:08:21.943031 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e74678-4991-4d4e-83aa-5004a7eb9b8e" containerName="route-controller-manager" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.943051 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e74678-4991-4d4e-83aa-5004a7eb9b8e" containerName="route-controller-manager" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.943208 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e74678-4991-4d4e-83aa-5004a7eb9b8e" containerName="route-controller-manager" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.943727 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:21 crc kubenswrapper[4782]: I1124 12:08:21.974220 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc"] Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108149 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-config\") pod \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108212 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e74678-4991-4d4e-83aa-5004a7eb9b8e-serving-cert\") pod \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-client-ca\") pod \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108360 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbb2q\" (UniqueName: \"kubernetes.io/projected/89e74678-4991-4d4e-83aa-5004a7eb9b8e-kube-api-access-vbb2q\") pod \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\" (UID: \"89e74678-4991-4d4e-83aa-5004a7eb9b8e\") " Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108602 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58cf0d85-92ea-4c0e-8f2b-112099971a39-serving-cert\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108655 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cf0d85-92ea-4c0e-8f2b-112099971a39-config\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgh4\" (UniqueName: \"kubernetes.io/projected/58cf0d85-92ea-4c0e-8f2b-112099971a39-kube-api-access-9zgh4\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.108720 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58cf0d85-92ea-4c0e-8f2b-112099971a39-client-ca\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc 
kubenswrapper[4782]: I1124 12:08:22.109416 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "89e74678-4991-4d4e-83aa-5004a7eb9b8e" (UID: "89e74678-4991-4d4e-83aa-5004a7eb9b8e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.109436 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-config" (OuterVolumeSpecName: "config") pod "89e74678-4991-4d4e-83aa-5004a7eb9b8e" (UID: "89e74678-4991-4d4e-83aa-5004a7eb9b8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.116505 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e74678-4991-4d4e-83aa-5004a7eb9b8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89e74678-4991-4d4e-83aa-5004a7eb9b8e" (UID: "89e74678-4991-4d4e-83aa-5004a7eb9b8e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.131565 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e74678-4991-4d4e-83aa-5004a7eb9b8e-kube-api-access-vbb2q" (OuterVolumeSpecName: "kube-api-access-vbb2q") pod "89e74678-4991-4d4e-83aa-5004a7eb9b8e" (UID: "89e74678-4991-4d4e-83aa-5004a7eb9b8e"). InnerVolumeSpecName "kube-api-access-vbb2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.209877 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58cf0d85-92ea-4c0e-8f2b-112099971a39-serving-cert\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.209943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cf0d85-92ea-4c0e-8f2b-112099971a39-config\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.209970 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgh4\" (UniqueName: \"kubernetes.io/projected/58cf0d85-92ea-4c0e-8f2b-112099971a39-kube-api-access-9zgh4\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.209993 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58cf0d85-92ea-4c0e-8f2b-112099971a39-client-ca\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.210081 4782 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-vbb2q\" (UniqueName: \"kubernetes.io/projected/89e74678-4991-4d4e-83aa-5004a7eb9b8e-kube-api-access-vbb2q\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.210098 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.210109 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e74678-4991-4d4e-83aa-5004a7eb9b8e-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.210121 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89e74678-4991-4d4e-83aa-5004a7eb9b8e-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.211137 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58cf0d85-92ea-4c0e-8f2b-112099971a39-client-ca\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.212570 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cf0d85-92ea-4c0e-8f2b-112099971a39-config\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.214146 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58cf0d85-92ea-4c0e-8f2b-112099971a39-serving-cert\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.235812 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7b5b96b6dd-wps2l_89e74678-4991-4d4e-83aa-5004a7eb9b8e/route-controller-manager/0.log" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.235880 4782 generic.go:334] "Generic (PLEG): container finished" podID="89e74678-4991-4d4e-83aa-5004a7eb9b8e" containerID="81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e" exitCode=255 Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.235942 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" event={"ID":"89e74678-4991-4d4e-83aa-5004a7eb9b8e","Type":"ContainerDied","Data":"81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e"} Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.235974 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" event={"ID":"89e74678-4991-4d4e-83aa-5004a7eb9b8e","Type":"ContainerDied","Data":"2e04c9a32aa37fd762149bf3fd214b270091cc4e23f30f771ce725ad5d4cacfc"} Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.235994 4782 scope.go:117] "RemoveContainer" 
containerID="81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.236123 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.241652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" event={"ID":"d357300e-e000-4f7b-9580-e6594c53dc90","Type":"ContainerStarted","Data":"a0f3899053590d22a01aba2c737ec0126cbb8ffd2d449cdb1f9a0c893fa3fa21"} Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.241699 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" event={"ID":"d357300e-e000-4f7b-9580-e6594c53dc90","Type":"ContainerStarted","Data":"a2afdc699d756a59f39b3719bac2200f9f5db979524bf746b5392f525578e5ca"} Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.242507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgh4\" (UniqueName: \"kubernetes.io/projected/58cf0d85-92ea-4c0e-8f2b-112099971a39-kube-api-access-9zgh4\") pod \"route-controller-manager-7cbffcff7d-zbckc\" (UID: \"58cf0d85-92ea-4c0e-8f2b-112099971a39\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.242552 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.248710 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.260699 4782 scope.go:117] "RemoveContainer" containerID="81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e" Nov 24 12:08:22 crc kubenswrapper[4782]: E1124 12:08:22.261224 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e\": container with ID starting with 81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e not found: ID does not exist" containerID="81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.261315 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e"} err="failed to get container status \"81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e\": rpc error: code = NotFound desc = could not find container \"81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e\": container with ID starting with 81c865c6376ab44c680e14ddd0638f540cc5b6b49c50180ef174aa5a4604378e not found: ID does not exist" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.264751 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.269801 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cbf559779-bwsxl" podStartSLOduration=4.269785291 podStartE2EDuration="4.269785291s" podCreationTimestamp="2025-11-24 12:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:08:22.268967199 +0000 UTC m=+751.512800968" watchObservedRunningTime="2025-11-24 12:08:22.269785291 +0000 UTC m=+751.513619050" Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.317256 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l"] Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.333557 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5b96b6dd-wps2l"] Nov 24 12:08:22 crc kubenswrapper[4782]: I1124 12:08:22.703736 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc"] Nov 24 12:08:23 crc kubenswrapper[4782]: I1124 12:08:23.248672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" event={"ID":"58cf0d85-92ea-4c0e-8f2b-112099971a39","Type":"ContainerStarted","Data":"2cf8438e56ad17a9938601c5d790b5e9e8ee90fc1e29886b8e43a949dea9ff8e"} Nov 24 12:08:23 crc kubenswrapper[4782]: I1124 12:08:23.249033 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" event={"ID":"58cf0d85-92ea-4c0e-8f2b-112099971a39","Type":"ContainerStarted","Data":"dbaccc1f5b58a2d222c0e72e20477ec87025b837e36d6844ea1d60a40a21444a"} Nov 24 12:08:23 crc kubenswrapper[4782]: I1124 12:08:23.249357 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:23 crc kubenswrapper[4782]: I1124 12:08:23.276164 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" podStartSLOduration=3.276144613 podStartE2EDuration="3.276144613s" podCreationTimestamp="2025-11-24 12:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:08:23.274323253 +0000 UTC m=+752.518157042" watchObservedRunningTime="2025-11-24 12:08:23.276144613 +0000 UTC m=+752.519978392" Nov 24 12:08:23 crc kubenswrapper[4782]: I1124 12:08:23.499262 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e74678-4991-4d4e-83aa-5004a7eb9b8e" path="/var/lib/kubelet/pods/89e74678-4991-4d4e-83aa-5004a7eb9b8e/volumes" Nov 24 12:08:23 crc kubenswrapper[4782]: I1124 12:08:23.622145 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cbffcff7d-zbckc" Nov 24 12:08:27 crc kubenswrapper[4782]: I1124 12:08:27.536859 4782 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 12:08:30 crc kubenswrapper[4782]: I1124 
12:08:30.410101 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:08:30 crc kubenswrapper[4782]: I1124 12:08:30.410495 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:08:30 crc kubenswrapper[4782]: I1124 12:08:30.410544 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:08:30 crc kubenswrapper[4782]: I1124 12:08:30.410986 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b0de42a9e31e2c0b3d63e2b240a6563edaaeabf4b832fd49516335de30ba2d0"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:08:30 crc kubenswrapper[4782]: I1124 12:08:30.411028 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://0b0de42a9e31e2c0b3d63e2b240a6563edaaeabf4b832fd49516335de30ba2d0" gracePeriod=600 Nov 24 12:08:31 crc kubenswrapper[4782]: I1124 12:08:31.297502 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="0b0de42a9e31e2c0b3d63e2b240a6563edaaeabf4b832fd49516335de30ba2d0" exitCode=0 Nov 24 12:08:31 crc kubenswrapper[4782]: I1124 12:08:31.297556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"0b0de42a9e31e2c0b3d63e2b240a6563edaaeabf4b832fd49516335de30ba2d0"} Nov 24 12:08:31 crc kubenswrapper[4782]: I1124 12:08:31.297819 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"5948f238852b206092c207d3cf86760b27f85d8ef83dc51c3375bc2e50a4023a"} Nov 24 12:08:31 crc kubenswrapper[4782]: I1124 12:08:31.297839 4782 scope.go:117] "RemoveContainer" containerID="fc5f1f7d75817a2e9e41c767286c65b537bdd7736d3a013c3d25b0320c190922" Nov 24 12:08:41 crc kubenswrapper[4782]: I1124 12:08:41.427372 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5cf8447d56-ls4f7" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.321945 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bk5fw"] Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.324363 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.331328 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.332569 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2r4k6" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.333163 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.341763 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-krmz8"] Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.342596 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.355612 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-krmz8"] Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356751 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-metrics\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356793 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/680910b6-d069-4019-8024-f483987e8347-metrics-certs\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356826 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4164755-f714-472a-9c05-c9978612bce6-cert\") pod \"frr-k8s-webhook-server-6998585d5-krmz8\" (UID: \"d4164755-f714-472a-9c05-c9978612bce6\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356843 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-frr-conf\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356870 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/680910b6-d069-4019-8024-f483987e8347-frr-startup\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356894 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-reloader\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-p4m7f\" (UniqueName: \"kubernetes.io/projected/680910b6-d069-4019-8024-f483987e8347-kube-api-access-p4m7f\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356948 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-frr-sockets\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.356973 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xhp\" (UniqueName: \"kubernetes.io/projected/d4164755-f714-472a-9c05-c9978612bce6-kube-api-access-z4xhp\") pod \"frr-k8s-webhook-server-6998585d5-krmz8\" (UID: \"d4164755-f714-472a-9c05-c9978612bce6\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.360887 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4164755-f714-472a-9c05-c9978612bce6-cert\") pod \"frr-k8s-webhook-server-6998585d5-krmz8\" (UID: \"d4164755-f714-472a-9c05-c9978612bce6\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458164 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-frr-conf\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458203 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/680910b6-d069-4019-8024-f483987e8347-frr-startup\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458237 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-reloader\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4m7f\" (UniqueName: \"kubernetes.io/projected/680910b6-d069-4019-8024-f483987e8347-kube-api-access-p4m7f\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-frr-sockets\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458331 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z4xhp\" (UniqueName: \"kubernetes.io/projected/d4164755-f714-472a-9c05-c9978612bce6-kube-api-access-z4xhp\") pod \"frr-k8s-webhook-server-6998585d5-krmz8\" (UID: \"d4164755-f714-472a-9c05-c9978612bce6\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458359 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-metrics\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.458386 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/680910b6-d069-4019-8024-f483987e8347-metrics-certs\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.458543 4782 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.458590 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/680910b6-d069-4019-8024-f483987e8347-metrics-certs podName:680910b6-d069-4019-8024-f483987e8347 nodeName:}" failed. No retries permitted until 2025-11-24 12:08:42.958574287 +0000 UTC m=+772.202408056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/680910b6-d069-4019-8024-f483987e8347-metrics-certs") pod "frr-k8s-bk5fw" (UID: "680910b6-d069-4019-8024-f483987e8347") : secret "frr-k8s-certs-secret" not found Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.459124 4782 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.459161 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4164755-f714-472a-9c05-c9978612bce6-cert podName:d4164755-f714-472a-9c05-c9978612bce6 nodeName:}" failed. No retries permitted until 2025-11-24 12:08:42.959152853 +0000 UTC m=+772.202986622 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4164755-f714-472a-9c05-c9978612bce6-cert") pod "frr-k8s-webhook-server-6998585d5-krmz8" (UID: "d4164755-f714-472a-9c05-c9978612bce6") : secret "frr-k8s-webhook-server-cert" not found Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.459628 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-frr-conf\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.460445 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/680910b6-d069-4019-8024-f483987e8347-frr-startup\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.460621 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-reloader\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.460996 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-frr-sockets\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.461319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/680910b6-d069-4019-8024-f483987e8347-metrics\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.480027 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xrpzs"] Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.480947 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.485209 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4m7f\" (UniqueName: \"kubernetes.io/projected/680910b6-d069-4019-8024-f483987e8347-kube-api-access-p4m7f\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.489870 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.490057 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.492444 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.492573 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ss5xk" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.495288 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-7tjql"] Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.502174 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.504222 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.505107 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xhp\" (UniqueName: \"kubernetes.io/projected/d4164755-f714-472a-9c05-c9978612bce6-kube-api-access-z4xhp\") pod \"frr-k8s-webhook-server-6998585d5-krmz8\" (UID: \"d4164755-f714-472a-9c05-c9978612bce6\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.511099 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-7tjql"] Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.659989 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-metrics-certs\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.660044 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772hh\" (UniqueName: \"kubernetes.io/projected/cbc9434a-55e0-497d-9658-7531208c412e-kube-api-access-772hh\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.660088 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-metrics-certs\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.660117 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.660139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-cert\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.660171 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfd7w\" (UniqueName: \"kubernetes.io/projected/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-kube-api-access-jfd7w\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.660201 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cbc9434a-55e0-497d-9658-7531208c412e-metallb-excludel2\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.761217 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-metrics-certs\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.761275 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.761302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-cert\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.761334 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfd7w\" (UniqueName: \"kubernetes.io/projected/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-kube-api-access-jfd7w\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.761364 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cbc9434a-55e0-497d-9658-7531208c412e-metallb-excludel2\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.761398 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-metrics-certs\") pod 
\"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.761425 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772hh\" (UniqueName: \"kubernetes.io/projected/cbc9434a-55e0-497d-9658-7531208c412e-kube-api-access-772hh\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.761812 4782 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.761831 4782 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.761860 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist podName:cbc9434a-55e0-497d-9658-7531208c412e nodeName:}" failed. No retries permitted until 2025-11-24 12:08:43.261848163 +0000 UTC m=+772.505681932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist") pod "speaker-xrpzs" (UID: "cbc9434a-55e0-497d-9658-7531208c412e") : secret "metallb-memberlist" not found Nov 24 12:08:42 crc kubenswrapper[4782]: E1124 12:08:42.761875 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-metrics-certs podName:9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95 nodeName:}" failed. No retries permitted until 2025-11-24 12:08:43.261869494 +0000 UTC m=+772.505703263 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-metrics-certs") pod "controller-6c7b4b5f48-7tjql" (UID: "9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95") : secret "controller-certs-secret" not found Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.762585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cbc9434a-55e0-497d-9658-7531208c412e-metallb-excludel2\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.766089 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-metrics-certs\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.766437 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-cert\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.786369 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfd7w\" (UniqueName: \"kubernetes.io/projected/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-kube-api-access-jfd7w\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.797880 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772hh\" (UniqueName: \"kubernetes.io/projected/cbc9434a-55e0-497d-9658-7531208c412e-kube-api-access-772hh\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.963188 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/680910b6-d069-4019-8024-f483987e8347-metrics-certs\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.963239 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4164755-f714-472a-9c05-c9978612bce6-cert\") pod \"frr-k8s-webhook-server-6998585d5-krmz8\" (UID: \"d4164755-f714-472a-9c05-c9978612bce6\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.966328 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/680910b6-d069-4019-8024-f483987e8347-metrics-certs\") pod \"frr-k8s-bk5fw\" (UID: \"680910b6-d069-4019-8024-f483987e8347\") " pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:42 crc kubenswrapper[4782]: I1124 12:08:42.966377 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4164755-f714-472a-9c05-c9978612bce6-cert\") pod \"frr-k8s-webhook-server-6998585d5-krmz8\" (UID: \"d4164755-f714-472a-9c05-c9978612bce6\") " 
pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.246176 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.262331 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.267296 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.267438 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-metrics-certs\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:43 crc kubenswrapper[4782]: E1124 12:08:43.267534 4782 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 12:08:43 crc kubenswrapper[4782]: E1124 12:08:43.267617 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist podName:cbc9434a-55e0-497d-9658-7531208c412e nodeName:}" failed. No retries permitted until 2025-11-24 12:08:44.267596955 +0000 UTC m=+773.511430794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist") pod "speaker-xrpzs" (UID: "cbc9434a-55e0-497d-9658-7531208c412e") : secret "metallb-memberlist" not found Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.272054 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95-metrics-certs\") pod \"controller-6c7b4b5f48-7tjql\" (UID: \"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95\") " pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.451815 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.691640 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-krmz8"] Nov 24 12:08:43 crc kubenswrapper[4782]: I1124 12:08:43.853278 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-7tjql"] Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.282429 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.287574 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbc9434a-55e0-497d-9658-7531208c412e-memberlist\") pod \"speaker-xrpzs\" (UID: \"cbc9434a-55e0-497d-9658-7531208c412e\") " pod="metallb-system/speaker-xrpzs" Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.343315 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xrpzs" Nov 24 12:08:44 crc kubenswrapper[4782]: W1124 12:08:44.362038 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbc9434a_55e0_497d_9658_7531208c412e.slice/crio-851a69b880ed6160962a8b0c930dcde248f61c859b4dabb6bd592b8b752c27f3 WatchSource:0}: Error finding container 851a69b880ed6160962a8b0c930dcde248f61c859b4dabb6bd592b8b752c27f3: Status 404 returned error can't find the container with id 851a69b880ed6160962a8b0c930dcde248f61c859b4dabb6bd592b8b752c27f3 Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.379407 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xrpzs" event={"ID":"cbc9434a-55e0-497d-9658-7531208c412e","Type":"ContainerStarted","Data":"851a69b880ed6160962a8b0c930dcde248f61c859b4dabb6bd592b8b752c27f3"} Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.381071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerStarted","Data":"fa11b1338d7ba24f7e1bad51d63e8dfcf8989c3d7b885e99ba7ec321b46efb5d"} Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.384002 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-7tjql" event={"ID":"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95","Type":"ContainerStarted","Data":"34fc0482a43e6ecb601b349442a927e3ef39d66f5df9e0a390c69c80415ef564"} Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.384132 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-7tjql" event={"ID":"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95","Type":"ContainerStarted","Data":"d49865a7e557c4d5717b74ecda3639b6b55cd90b96219973c8d091683cc682cd"} Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.384210 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-7tjql" event={"ID":"9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95","Type":"ContainerStarted","Data":"e3a80c748e48e51479f74f76797b98eacf6d8fc8ecfa944943c0a20913c8eb8c"} Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.385142 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.386185 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" event={"ID":"d4164755-f714-472a-9c05-c9978612bce6","Type":"ContainerStarted","Data":"52c4997daf4591d7ced03efc721a6b556c934a8095d376348985b633b9467a63"} Nov 24 12:08:44 crc kubenswrapper[4782]: I1124 12:08:44.419144 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-7tjql" podStartSLOduration=2.4191257999999998 podStartE2EDuration="2.4191258s" podCreationTimestamp="2025-11-24 12:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:08:44.416422095 +0000 UTC m=+773.660255874" watchObservedRunningTime="2025-11-24 12:08:44.4191258 +0000 UTC m=+773.662959569" Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.404900 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xrpzs" event={"ID":"cbc9434a-55e0-497d-9658-7531208c412e","Type":"ContainerStarted","Data":"81ba1f9a7faf19d89218b70a72096df1230aebafc9a24073069a2789c3efde61"} Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.405197 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xrpzs" event={"ID":"cbc9434a-55e0-497d-9658-7531208c412e","Type":"ContainerStarted","Data":"64320e481e9b772bae3538c3296d74e297c21027b1b2c33608fb911b8de0c020"} Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.428751 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xrpzs" podStartSLOduration=3.428735481 podStartE2EDuration="3.428735481s" podCreationTimestamp="2025-11-24 12:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:08:45.424116214 +0000 UTC m=+774.667949983" watchObservedRunningTime="2025-11-24 12:08:45.428735481 +0000 UTC m=+774.672569250" Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.793450 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7fnr7"] Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.795179 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.818374 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fnr7"] Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.912174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl4k2\" (UniqueName: \"kubernetes.io/projected/056f27e0-7e61-451e-8f1c-3da5aea2f91e-kube-api-access-bl4k2\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.912240 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-catalog-content\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:45 crc kubenswrapper[4782]: I1124 12:08:45.912262 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-utilities\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.013505 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl4k2\" (UniqueName: \"kubernetes.io/projected/056f27e0-7e61-451e-8f1c-3da5aea2f91e-kube-api-access-bl4k2\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.013621 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-catalog-content\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.013642 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-utilities\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.014064 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-utilities\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.014544 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-catalog-content\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.059240 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bl4k2\" (UniqueName: \"kubernetes.io/projected/056f27e0-7e61-451e-8f1c-3da5aea2f91e-kube-api-access-bl4k2\") pod \"community-operators-7fnr7\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.111317 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.414167 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xrpzs" Nov 24 12:08:46 crc kubenswrapper[4782]: I1124 12:08:46.804757 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fnr7"] Nov 24 12:08:46 crc kubenswrapper[4782]: W1124 12:08:46.839981 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod056f27e0_7e61_451e_8f1c_3da5aea2f91e.slice/crio-84940a821a875f2af96f127dc8c3e8a5117b17c9d9c97d54684d76f8d44f62ce WatchSource:0}: Error finding container 84940a821a875f2af96f127dc8c3e8a5117b17c9d9c97d54684d76f8d44f62ce: Status 404 returned error can't find the container with id 84940a821a875f2af96f127dc8c3e8a5117b17c9d9c97d54684d76f8d44f62ce Nov 24 12:08:47 crc kubenswrapper[4782]: I1124 12:08:47.420425 4782 generic.go:334] "Generic (PLEG): container finished" podID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerID="8b8dccb1af8c93f2388db9b07db491e16eef2e5f6afa08097f6412f67387e936" exitCode=0 Nov 24 12:08:47 crc kubenswrapper[4782]: I1124 12:08:47.421870 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fnr7" event={"ID":"056f27e0-7e61-451e-8f1c-3da5aea2f91e","Type":"ContainerDied","Data":"8b8dccb1af8c93f2388db9b07db491e16eef2e5f6afa08097f6412f67387e936"} Nov 24 12:08:47 crc kubenswrapper[4782]: I1124 12:08:47.421903 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fnr7" event={"ID":"056f27e0-7e61-451e-8f1c-3da5aea2f91e","Type":"ContainerStarted","Data":"84940a821a875f2af96f127dc8c3e8a5117b17c9d9c97d54684d76f8d44f62ce"} Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.352658 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6jg"] Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.366784 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.373892 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6jg"] Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.463349 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-utilities\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.463423 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-catalog-content\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.463493 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq6zw\" (UniqueName: \"kubernetes.io/projected/a541b7c2-440d-4596-9106-04be15c9c714-kube-api-access-tq6zw\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.565030 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq6zw\" (UniqueName: \"kubernetes.io/projected/a541b7c2-440d-4596-9106-04be15c9c714-kube-api-access-tq6zw\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.565106 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-utilities\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.565135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-catalog-content\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.565847 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-catalog-content\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.565877 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-utilities\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.589338 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tq6zw\" (UniqueName: \"kubernetes.io/projected/a541b7c2-440d-4596-9106-04be15c9c714-kube-api-access-tq6zw\") pod \"redhat-marketplace-tr6jg\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:48 crc kubenswrapper[4782]: I1124 12:08:48.683350 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:49 crc kubenswrapper[4782]: I1124 12:08:49.147934 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6jg"] Nov 24 12:08:49 crc kubenswrapper[4782]: I1124 12:08:49.445040 4782 generic.go:334] "Generic (PLEG): container finished" podID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerID="17e23e76a68c7463fa66d8e07cb8c261fd079c75f9a72507ca65490faa48bd81" exitCode=0 Nov 24 12:08:49 crc kubenswrapper[4782]: I1124 12:08:49.445083 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fnr7" event={"ID":"056f27e0-7e61-451e-8f1c-3da5aea2f91e","Type":"ContainerDied","Data":"17e23e76a68c7463fa66d8e07cb8c261fd079c75f9a72507ca65490faa48bd81"} Nov 24 12:08:51 crc kubenswrapper[4782]: I1124 12:08:51.836954 4782 scope.go:117] "RemoveContainer" containerID="729309e1d37fa7f5f16b47eefcb83cfd6d0b65a4773c57a059dc0fdd3c5f14ac" Nov 24 12:08:52 crc kubenswrapper[4782]: W1124 12:08:52.352871 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda541b7c2_440d_4596_9106_04be15c9c714.slice/crio-6935de890d8c756ad4449453038ca1386203554908a26d14b5bb0b21ea74fcaa WatchSource:0}: Error finding container 6935de890d8c756ad4449453038ca1386203554908a26d14b5bb0b21ea74fcaa: Status 404 returned error can't find the container with id 6935de890d8c756ad4449453038ca1386203554908a26d14b5bb0b21ea74fcaa Nov 24 12:08:52 crc kubenswrapper[4782]: I1124 12:08:52.470017 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6jg" event={"ID":"a541b7c2-440d-4596-9106-04be15c9c714","Type":"ContainerStarted","Data":"6935de890d8c756ad4449453038ca1386203554908a26d14b5bb0b21ea74fcaa"} Nov 24 12:08:53 crc kubenswrapper[4782]: I1124 12:08:53.456675 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-7tjql" Nov 24 12:08:53 crc kubenswrapper[4782]: I1124 12:08:53.479365 4782 generic.go:334] "Generic (PLEG): container finished" podID="a541b7c2-440d-4596-9106-04be15c9c714" containerID="e298cc081fd4c168eb7e63ebd7dff120ee0b5d8d01324db2c7b5bb8b936e0889" exitCode=0 Nov 24 12:08:53 crc kubenswrapper[4782]: I1124 12:08:53.479426 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6jg" event={"ID":"a541b7c2-440d-4596-9106-04be15c9c714","Type":"ContainerDied","Data":"e298cc081fd4c168eb7e63ebd7dff120ee0b5d8d01324db2c7b5bb8b936e0889"} Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.347844 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xrpzs" Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.487007 4782 generic.go:334] "Generic (PLEG): container finished" podID="a541b7c2-440d-4596-9106-04be15c9c714" containerID="a60005f2cbd4042e764f3901a5c12e422a3a1a50df67a796f4e945c8a962e629" exitCode=0 Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.487068 4782 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6jg" event={"ID":"a541b7c2-440d-4596-9106-04be15c9c714","Type":"ContainerDied","Data":"a60005f2cbd4042e764f3901a5c12e422a3a1a50df67a796f4e945c8a962e629"} Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.489994 4782 generic.go:334] "Generic (PLEG): container finished" podID="680910b6-d069-4019-8024-f483987e8347" containerID="fe367cb29b94a46720198ca6a5d7f149c27714a905db3183b33a8c065722154e" exitCode=0 Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.490037 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerDied","Data":"fe367cb29b94a46720198ca6a5d7f149c27714a905db3183b33a8c065722154e"} Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.494144 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fnr7" event={"ID":"056f27e0-7e61-451e-8f1c-3da5aea2f91e","Type":"ContainerStarted","Data":"232faff9a40296cdbafe40e87e853b1eebc5815fa8ca5509a0b095ab56357d02"} Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.497826 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" event={"ID":"d4164755-f714-472a-9c05-c9978612bce6","Type":"ContainerStarted","Data":"e3b0b7b450dcecf1e1a15e25b0eaafbe17c254d09f17fc4af03c33d5c89e1d09"} Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.498052 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.546031 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7fnr7" podStartSLOduration=3.6153301669999998 podStartE2EDuration="9.546010724s" podCreationTimestamp="2025-11-24 12:08:45 +0000 UTC" firstStartedPulling="2025-11-24 12:08:47.423779793 +0000 UTC m=+776.667613582" lastFinishedPulling="2025-11-24 12:08:53.35446038 +0000 UTC m=+782.598294139" observedRunningTime="2025-11-24 12:08:54.534123185 +0000 UTC m=+783.777956954" watchObservedRunningTime="2025-11-24 12:08:54.546010724 +0000 UTC m=+783.789844493" Nov 24 12:08:54 crc kubenswrapper[4782]: I1124 12:08:54.599175 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" podStartSLOduration=2.938733262 podStartE2EDuration="12.599159442s" podCreationTimestamp="2025-11-24 12:08:42 +0000 UTC" firstStartedPulling="2025-11-24 12:08:43.699295672 +0000 UTC m=+772.943129431" lastFinishedPulling="2025-11-24 12:08:53.359721842 +0000 UTC m=+782.603555611" observedRunningTime="2025-11-24 12:08:54.597567612 +0000 UTC m=+783.841401391" watchObservedRunningTime="2025-11-24 12:08:54.599159442 +0000 UTC m=+783.842993211" Nov 24 12:08:55 crc kubenswrapper[4782]: I1124 12:08:55.504926 4782 generic.go:334] "Generic (PLEG): container finished" podID="680910b6-d069-4019-8024-f483987e8347" containerID="2e363d6698153d85bf74a4e668ca689ef501ccbcfabd7a9ad8121fb7153e5b7e" exitCode=0 Nov 24 12:08:55 crc kubenswrapper[4782]: I1124 12:08:55.505016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerDied","Data":"2e363d6698153d85bf74a4e668ca689ef501ccbcfabd7a9ad8121fb7153e5b7e"} Nov 24 12:08:55 crc kubenswrapper[4782]: I1124 12:08:55.507897 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6jg" event={"ID":"a541b7c2-440d-4596-9106-04be15c9c714","Type":"ContainerStarted","Data":"fca49b4942a3f36ba809111b16e1eaca0e8de4ad334dd73755721276bbe90c2a"} Nov 24 12:08:55 crc kubenswrapper[4782]: I1124 12:08:55.557705 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tr6jg" podStartSLOduration=6.087228031 podStartE2EDuration="7.557684428s" podCreationTimestamp="2025-11-24 12:08:48 +0000 UTC" firstStartedPulling="2025-11-24 12:08:53.486116755 +0000 UTC m=+782.729950524" lastFinishedPulling="2025-11-24 12:08:54.956573152 +0000 UTC m=+784.200406921" observedRunningTime="2025-11-24 12:08:55.555865762 +0000 UTC m=+784.799699541" watchObservedRunningTime="2025-11-24 12:08:55.557684428 +0000 UTC m=+784.801518197" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.111536 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.111868 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.151161 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7mzqz"] Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.153196 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.163844 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mzqz"] Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.169718 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4ps7\" (UniqueName: \"kubernetes.io/projected/29d3c176-5c9e-4191-b8fa-cb8433ebca53-kube-api-access-j4ps7\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.169771 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-catalog-content\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.169849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-utilities\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.170785 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.271133 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4ps7\" (UniqueName: \"kubernetes.io/projected/29d3c176-5c9e-4191-b8fa-cb8433ebca53-kube-api-access-j4ps7\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " 
pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.271211 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-catalog-content\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.271283 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-utilities\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.271830 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-utilities\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.271866 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-catalog-content\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.296246 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4ps7\" (UniqueName: \"kubernetes.io/projected/29d3c176-5c9e-4191-b8fa-cb8433ebca53-kube-api-access-j4ps7\") pod \"redhat-operators-7mzqz\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.467127 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.514666 4782 generic.go:334] "Generic (PLEG): container finished" podID="680910b6-d069-4019-8024-f483987e8347" containerID="2fd84bd905abac41b9e991329f96303b01b194dba02f7be7657b4e803fa6db8a" exitCode=0 Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.515616 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerDied","Data":"2fd84bd905abac41b9e991329f96303b01b194dba02f7be7657b4e803fa6db8a"} Nov 24 12:08:56 crc kubenswrapper[4782]: I1124 12:08:56.965017 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mzqz"] Nov 24 12:08:56 crc kubenswrapper[4782]: W1124 12:08:56.975654 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29d3c176_5c9e_4191_b8fa_cb8433ebca53.slice/crio-41c238646b26fbe3c8d56df13f0701c418f3b40f841cfb54888efe176e4bc9b0 WatchSource:0}: Error finding container 41c238646b26fbe3c8d56df13f0701c418f3b40f841cfb54888efe176e4bc9b0: Status 404 returned error can't find the container with id 41c238646b26fbe3c8d56df13f0701c418f3b40f841cfb54888efe176e4bc9b0 Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.528058 4782 generic.go:334] "Generic (PLEG): container finished" podID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerID="c39be4a505f522ca47107047a7eaf3af5e6def88c4c8009be2e62305af67166f" exitCode=0 Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.529972 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mzqz" event={"ID":"29d3c176-5c9e-4191-b8fa-cb8433ebca53","Type":"ContainerDied","Data":"c39be4a505f522ca47107047a7eaf3af5e6def88c4c8009be2e62305af67166f"} Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.538585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mzqz" event={"ID":"29d3c176-5c9e-4191-b8fa-cb8433ebca53","Type":"ContainerStarted","Data":"41c238646b26fbe3c8d56df13f0701c418f3b40f841cfb54888efe176e4bc9b0"} Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.543448 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerStarted","Data":"53198e90e2ebb573fa63966a4956f024240f5bf110f9fcb556b215fd9fa47377"} Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.547654 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerStarted","Data":"3f6a95903dbc39e854eaa8d9c86d1dbc99a343ecde1d47163408ab9bfd5db36a"} Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.551508 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerStarted","Data":"5012835e7eef40290cfd893ff788644c0aed72d0fc05fa64b589a93dd2260646"} Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.551547 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerStarted","Data":"0547864e2270898a00fe2f6820dedf5413500047dd8930f6e54c798aecfd575f"} Nov 24 12:08:57 crc kubenswrapper[4782]: I1124 12:08:57.551561 4782 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerStarted","Data":"929a58789e15e9b3dd9a2c6ff1fecbaa5dbc45a570ddf8f3555b553e0bc524a5"} Nov 24 12:08:58 crc kubenswrapper[4782]: I1124 12:08:58.553226 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mzqz" event={"ID":"29d3c176-5c9e-4191-b8fa-cb8433ebca53","Type":"ContainerStarted","Data":"c5ad6480788f34a0eec6439f1b0e56507f749eae9f49d0df204ddb794b6d9492"} Nov 24 12:08:58 crc kubenswrapper[4782]: I1124 12:08:58.559562 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bk5fw" event={"ID":"680910b6-d069-4019-8024-f483987e8347","Type":"ContainerStarted","Data":"d6662361b51c5a90e3af0d5538ba3d3bbd93995e678ac8124ee45bfddbd9b962"} Nov 24 12:08:58 crc kubenswrapper[4782]: I1124 12:08:58.560061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:08:58 crc kubenswrapper[4782]: I1124 12:08:58.603777 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bk5fw" podStartSLOduration=6.621181978 podStartE2EDuration="16.603755941s" podCreationTimestamp="2025-11-24 12:08:42 +0000 UTC" firstStartedPulling="2025-11-24 12:08:43.404523769 +0000 UTC m=+772.648357538" lastFinishedPulling="2025-11-24 12:08:53.387097722 +0000 UTC m=+782.630931501" observedRunningTime="2025-11-24 12:08:58.598329035 +0000 UTC m=+787.842162814" watchObservedRunningTime="2025-11-24 12:08:58.603755941 +0000 UTC m=+787.847589710" Nov 24 12:08:58 crc kubenswrapper[4782]: I1124 12:08:58.683687 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:58 crc kubenswrapper[4782]: I1124 12:08:58.683754 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:58 crc kubenswrapper[4782]: I1124 12:08:58.722042 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:08:59 crc kubenswrapper[4782]: I1124 12:08:59.566995 4782 generic.go:334] "Generic (PLEG): container finished" podID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerID="c5ad6480788f34a0eec6439f1b0e56507f749eae9f49d0df204ddb794b6d9492" exitCode=0 Nov 24 12:08:59 crc kubenswrapper[4782]: I1124 12:08:59.567054 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mzqz" event={"ID":"29d3c176-5c9e-4191-b8fa-cb8433ebca53","Type":"ContainerDied","Data":"c5ad6480788f34a0eec6439f1b0e56507f749eae9f49d0df204ddb794b6d9492"} Nov 24 12:09:01 crc kubenswrapper[4782]: I1124 12:09:01.579996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mzqz" event={"ID":"29d3c176-5c9e-4191-b8fa-cb8433ebca53","Type":"ContainerStarted","Data":"4964354d35fac5529840f449328ed3d7693ab00f73847d093f0e941776ccaa90"} Nov 24 12:09:01 crc kubenswrapper[4782]: I1124 12:09:01.599322 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7mzqz" podStartSLOduration=2.638583158 podStartE2EDuration="5.599307161s" podCreationTimestamp="2025-11-24 12:08:56 +0000 UTC" firstStartedPulling="2025-11-24 12:08:57.530829894 +0000 UTC m=+786.774663653" lastFinishedPulling="2025-11-24 12:09:00.491553887 +0000 UTC m=+789.735387656" 
observedRunningTime="2025-11-24 12:09:01.595788913 +0000 UTC m=+790.839622692" watchObservedRunningTime="2025-11-24 12:09:01.599307161 +0000 UTC m=+790.843140950" Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.746230 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qkfdv"] Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.747197 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.749186 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.749362 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-d8pn6" Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.749500 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.765738 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qkfdv"] Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.882367 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zmb\" (UniqueName: \"kubernetes.io/projected/f26694c9-e51b-4e17-b20c-eafbb8164ba8-kube-api-access-g4zmb\") pod \"openstack-operator-index-qkfdv\" (UID: \"f26694c9-e51b-4e17-b20c-eafbb8164ba8\") " pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:02 crc kubenswrapper[4782]: I1124 12:09:02.983093 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4zmb\" (UniqueName: \"kubernetes.io/projected/f26694c9-e51b-4e17-b20c-eafbb8164ba8-kube-api-access-g4zmb\") pod \"openstack-operator-index-qkfdv\" (UID: \"f26694c9-e51b-4e17-b20c-eafbb8164ba8\") " pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:03 crc kubenswrapper[4782]: I1124 12:09:03.012781 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4zmb\" (UniqueName: \"kubernetes.io/projected/f26694c9-e51b-4e17-b20c-eafbb8164ba8-kube-api-access-g4zmb\") pod \"openstack-operator-index-qkfdv\" (UID: \"f26694c9-e51b-4e17-b20c-eafbb8164ba8\") " pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:03 crc kubenswrapper[4782]: I1124 12:09:03.108063 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:03 crc kubenswrapper[4782]: I1124 12:09:03.249694 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:09:03 crc kubenswrapper[4782]: I1124 12:09:03.275262 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-krmz8" Nov 24 12:09:03 crc kubenswrapper[4782]: I1124 12:09:03.300140 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:09:03 crc kubenswrapper[4782]: I1124 12:09:03.333992 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qkfdv"] Nov 24 12:09:03 crc kubenswrapper[4782]: W1124 12:09:03.342568 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf26694c9_e51b_4e17_b20c_eafbb8164ba8.slice/crio-4f0bb8e2472dbae695dd207ffcfe16d22bcc10a7b44cf72a51fa17c8812d5a0f WatchSource:0}: Error finding container 4f0bb8e2472dbae695dd207ffcfe16d22bcc10a7b44cf72a51fa17c8812d5a0f: Status 404 returned error can't find the container with id 4f0bb8e2472dbae695dd207ffcfe16d22bcc10a7b44cf72a51fa17c8812d5a0f Nov 24 12:09:03 crc kubenswrapper[4782]: I1124 12:09:03.593819 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qkfdv" event={"ID":"f26694c9-e51b-4e17-b20c-eafbb8164ba8","Type":"ContainerStarted","Data":"4f0bb8e2472dbae695dd207ffcfe16d22bcc10a7b44cf72a51fa17c8812d5a0f"} Nov 24 12:09:06 crc kubenswrapper[4782]: I1124 12:09:06.152589 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:09:06 crc kubenswrapper[4782]: I1124 12:09:06.468727 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:09:06 crc kubenswrapper[4782]: I1124 12:09:06.469063 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:09:06 crc kubenswrapper[4782]: I1124 12:09:06.518800 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:09:06 crc kubenswrapper[4782]: I1124 12:09:06.656482 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:09:07 crc kubenswrapper[4782]: I1124 12:09:07.541179 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mzqz"] Nov 24 12:09:08 crc kubenswrapper[4782]: I1124 12:09:08.627314 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7mzqz" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="registry-server" containerID="cri-o://4964354d35fac5529840f449328ed3d7693ab00f73847d093f0e941776ccaa90" gracePeriod=2 Nov 24 12:09:08 crc kubenswrapper[4782]: I1124 12:09:08.740955 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.636733 4782 generic.go:334] "Generic (PLEG): container finished" podID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" 
containerID="4964354d35fac5529840f449328ed3d7693ab00f73847d093f0e941776ccaa90" exitCode=0 Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.636993 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mzqz" event={"ID":"29d3c176-5c9e-4191-b8fa-cb8433ebca53","Type":"ContainerDied","Data":"4964354d35fac5529840f449328ed3d7693ab00f73847d093f0e941776ccaa90"} Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.739891 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6jg"] Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.740173 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tr6jg" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="registry-server" containerID="cri-o://fca49b4942a3f36ba809111b16e1eaca0e8de4ad334dd73755721276bbe90c2a" gracePeriod=2 Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.751048 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.788004 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-utilities\") pod \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.788091 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-catalog-content\") pod \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.788171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4ps7\" (UniqueName: \"kubernetes.io/projected/29d3c176-5c9e-4191-b8fa-cb8433ebca53-kube-api-access-j4ps7\") pod \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\" (UID: \"29d3c176-5c9e-4191-b8fa-cb8433ebca53\") " Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.789367 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-utilities" (OuterVolumeSpecName: "utilities") pod "29d3c176-5c9e-4191-b8fa-cb8433ebca53" (UID: "29d3c176-5c9e-4191-b8fa-cb8433ebca53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.804332 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d3c176-5c9e-4191-b8fa-cb8433ebca53-kube-api-access-j4ps7" (OuterVolumeSpecName: "kube-api-access-j4ps7") pod "29d3c176-5c9e-4191-b8fa-cb8433ebca53" (UID: "29d3c176-5c9e-4191-b8fa-cb8433ebca53"). InnerVolumeSpecName "kube-api-access-j4ps7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.888728 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4ps7\" (UniqueName: \"kubernetes.io/projected/29d3c176-5c9e-4191-b8fa-cb8433ebca53-kube-api-access-j4ps7\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.888760 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.888724 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29d3c176-5c9e-4191-b8fa-cb8433ebca53" (UID: "29d3c176-5c9e-4191-b8fa-cb8433ebca53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:09:09 crc kubenswrapper[4782]: I1124 12:09:09.990358 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c176-5c9e-4191-b8fa-cb8433ebca53-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.644036 4782 generic.go:334] "Generic (PLEG): container finished" podID="a541b7c2-440d-4596-9106-04be15c9c714" containerID="fca49b4942a3f36ba809111b16e1eaca0e8de4ad334dd73755721276bbe90c2a" exitCode=0 Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.644112 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6jg" event={"ID":"a541b7c2-440d-4596-9106-04be15c9c714","Type":"ContainerDied","Data":"fca49b4942a3f36ba809111b16e1eaca0e8de4ad334dd73755721276bbe90c2a"} Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.647178 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mzqz" event={"ID":"29d3c176-5c9e-4191-b8fa-cb8433ebca53","Type":"ContainerDied","Data":"41c238646b26fbe3c8d56df13f0701c418f3b40f841cfb54888efe176e4bc9b0"} Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.647222 4782 scope.go:117] "RemoveContainer" containerID="4964354d35fac5529840f449328ed3d7693ab00f73847d093f0e941776ccaa90" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.647232 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mzqz" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.685238 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mzqz"] Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.688433 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7mzqz"] Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.725432 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.803938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-utilities\") pod \"a541b7c2-440d-4596-9106-04be15c9c714\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.804021 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-catalog-content\") pod \"a541b7c2-440d-4596-9106-04be15c9c714\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.804063 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq6zw\" (UniqueName: \"kubernetes.io/projected/a541b7c2-440d-4596-9106-04be15c9c714-kube-api-access-tq6zw\") pod \"a541b7c2-440d-4596-9106-04be15c9c714\" (UID: \"a541b7c2-440d-4596-9106-04be15c9c714\") " Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.805009 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-utilities" (OuterVolumeSpecName: "utilities") pod "a541b7c2-440d-4596-9106-04be15c9c714" (UID: "a541b7c2-440d-4596-9106-04be15c9c714"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.808525 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a541b7c2-440d-4596-9106-04be15c9c714-kube-api-access-tq6zw" (OuterVolumeSpecName: "kube-api-access-tq6zw") pod "a541b7c2-440d-4596-9106-04be15c9c714" (UID: "a541b7c2-440d-4596-9106-04be15c9c714"). InnerVolumeSpecName "kube-api-access-tq6zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.821481 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a541b7c2-440d-4596-9106-04be15c9c714" (UID: "a541b7c2-440d-4596-9106-04be15c9c714"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.905335 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.905398 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a541b7c2-440d-4596-9106-04be15c9c714-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:10 crc kubenswrapper[4782]: I1124 12:09:10.905417 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq6zw\" (UniqueName: \"kubernetes.io/projected/a541b7c2-440d-4596-9106-04be15c9c714-kube-api-access-tq6zw\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.371914 4782 scope.go:117] "RemoveContainer" containerID="c5ad6480788f34a0eec6439f1b0e56507f749eae9f49d0df204ddb794b6d9492" Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.478274 4782 scope.go:117] "RemoveContainer" containerID="c39be4a505f522ca47107047a7eaf3af5e6def88c4c8009be2e62305af67166f" Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.502997 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" path="/var/lib/kubelet/pods/29d3c176-5c9e-4191-b8fa-cb8433ebca53/volumes" Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.658524 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6jg" event={"ID":"a541b7c2-440d-4596-9106-04be15c9c714","Type":"ContainerDied","Data":"6935de890d8c756ad4449453038ca1386203554908a26d14b5bb0b21ea74fcaa"} Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.658569 4782 scope.go:117] "RemoveContainer" containerID="fca49b4942a3f36ba809111b16e1eaca0e8de4ad334dd73755721276bbe90c2a" Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.658650 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6jg" Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.674581 4782 scope.go:117] "RemoveContainer" containerID="a60005f2cbd4042e764f3901a5c12e422a3a1a50df67a796f4e945c8a962e629" Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.686976 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6jg"] Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.692886 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6jg"] Nov 24 12:09:11 crc kubenswrapper[4782]: I1124 12:09:11.709298 4782 scope.go:117] "RemoveContainer" containerID="e298cc081fd4c168eb7e63ebd7dff120ee0b5d8d01324db2c7b5bb8b936e0889" Nov 24 12:09:12 crc kubenswrapper[4782]: I1124 12:09:12.535698 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fnr7"] Nov 24 12:09:12 crc kubenswrapper[4782]: I1124 12:09:12.535959 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7fnr7" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="registry-server" containerID="cri-o://232faff9a40296cdbafe40e87e853b1eebc5815fa8ca5509a0b095ab56357d02" gracePeriod=2 Nov 24 12:09:12 crc kubenswrapper[4782]: I1124 12:09:12.671141 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qkfdv" event={"ID":"f26694c9-e51b-4e17-b20c-eafbb8164ba8","Type":"ContainerStarted","Data":"58e90104b453a51a53351de61e5782e1cfd5480cad23812854e9433ab8df30a9"} Nov 24 12:09:12 crc kubenswrapper[4782]: I1124 12:09:12.674940 4782 generic.go:334] "Generic (PLEG): container finished" podID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerID="232faff9a40296cdbafe40e87e853b1eebc5815fa8ca5509a0b095ab56357d02" exitCode=0 Nov 24 12:09:12 crc kubenswrapper[4782]: I1124 12:09:12.674978 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fnr7" event={"ID":"056f27e0-7e61-451e-8f1c-3da5aea2f91e","Type":"ContainerDied","Data":"232faff9a40296cdbafe40e87e853b1eebc5815fa8ca5509a0b095ab56357d02"} Nov 24 12:09:12 crc kubenswrapper[4782]: I1124 12:09:12.995259 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.019229 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qkfdv" podStartSLOduration=2.822879573 podStartE2EDuration="11.019212083s" podCreationTimestamp="2025-11-24 12:09:02 +0000 UTC" firstStartedPulling="2025-11-24 12:09:03.344417614 +0000 UTC m=+792.588251373" lastFinishedPulling="2025-11-24 12:09:11.540750104 +0000 UTC m=+800.784583883" observedRunningTime="2025-11-24 12:09:12.692721301 +0000 UTC m=+801.936555070" watchObservedRunningTime="2025-11-24 12:09:13.019212083 +0000 UTC m=+802.263045852" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.056494 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-catalog-content\") pod \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.056570 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-utilities\") pod \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.056624 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl4k2\" (UniqueName: \"kubernetes.io/projected/056f27e0-7e61-451e-8f1c-3da5aea2f91e-kube-api-access-bl4k2\") pod \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\" (UID: \"056f27e0-7e61-451e-8f1c-3da5aea2f91e\") " Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.057554 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-utilities" (OuterVolumeSpecName: "utilities") pod "056f27e0-7e61-451e-8f1c-3da5aea2f91e" (UID: "056f27e0-7e61-451e-8f1c-3da5aea2f91e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.069519 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056f27e0-7e61-451e-8f1c-3da5aea2f91e-kube-api-access-bl4k2" (OuterVolumeSpecName: "kube-api-access-bl4k2") pod "056f27e0-7e61-451e-8f1c-3da5aea2f91e" (UID: "056f27e0-7e61-451e-8f1c-3da5aea2f91e"). InnerVolumeSpecName "kube-api-access-bl4k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.109754 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.109821 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.115093 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "056f27e0-7e61-451e-8f1c-3da5aea2f91e" (UID: "056f27e0-7e61-451e-8f1c-3da5aea2f91e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.157662 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.157954 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056f27e0-7e61-451e-8f1c-3da5aea2f91e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.158046 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl4k2\" (UniqueName: \"kubernetes.io/projected/056f27e0-7e61-451e-8f1c-3da5aea2f91e-kube-api-access-bl4k2\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.158970 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.249556 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bk5fw" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.499310 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a541b7c2-440d-4596-9106-04be15c9c714" path="/var/lib/kubelet/pods/a541b7c2-440d-4596-9106-04be15c9c714/volumes" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.681778 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fnr7" event={"ID":"056f27e0-7e61-451e-8f1c-3da5aea2f91e","Type":"ContainerDied","Data":"84940a821a875f2af96f127dc8c3e8a5117b17c9d9c97d54684d76f8d44f62ce"} Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.681832 4782 scope.go:117] "RemoveContainer" containerID="232faff9a40296cdbafe40e87e853b1eebc5815fa8ca5509a0b095ab56357d02" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.681794 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fnr7" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.695854 4782 scope.go:117] "RemoveContainer" containerID="17e23e76a68c7463fa66d8e07cb8c261fd079c75f9a72507ca65490faa48bd81" Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.702241 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fnr7"] Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.705644 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7fnr7"] Nov 24 12:09:13 crc kubenswrapper[4782]: I1124 12:09:13.710613 4782 scope.go:117] "RemoveContainer" containerID="8b8dccb1af8c93f2388db9b07db491e16eef2e5f6afa08097f6412f67387e936" Nov 24 12:09:15 crc kubenswrapper[4782]: I1124 12:09:15.501653 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" path="/var/lib/kubelet/pods/056f27e0-7e61-451e-8f1c-3da5aea2f91e/volumes" Nov 24 12:09:23 crc kubenswrapper[4782]: I1124 12:09:23.134505 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qkfdv" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776522 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"] Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776746 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776758 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776771 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="extract-content" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776779 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="extract-content" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776789 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="extract-utilities" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776795 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="extract-utilities" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776808 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776814 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776838 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="extract-utilities" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776844 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="extract-utilities" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776855 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="extract-content" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776860 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="extract-content" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776867 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776873 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776880 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="extract-utilities" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776886 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="extract-utilities" Nov 24 12:09:24 crc kubenswrapper[4782]: E1124 12:09:24.776894 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="extract-content" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.776899 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="extract-content" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.777000 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a541b7c2-440d-4596-9106-04be15c9c714" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.777015 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="056f27e0-7e61-451e-8f1c-3da5aea2f91e" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.777021 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d3c176-5c9e-4191-b8fa-cb8433ebca53" containerName="registry-server" Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.777812 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.780108 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-27vfz"
Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.800798 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"]
Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.908888 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-bundle\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.908935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-util\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:24 crc kubenswrapper[4782]: I1124 12:09:24.908994 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkcb5\" (UniqueName: \"kubernetes.io/projected/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-kube-api-access-pkcb5\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.010225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-bundle\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.010930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-util\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.010744 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-bundle\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.011067 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkcb5\" (UniqueName: \"kubernetes.io/projected/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-kube-api-access-pkcb5\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.011477 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-util\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.058486 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkcb5\" (UniqueName: \"kubernetes.io/projected/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-kube-api-access-pkcb5\") pod \"ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") " pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.095572 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.529112 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"]
Nov 24 12:09:25 crc kubenswrapper[4782]: I1124 12:09:25.749497 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh" event={"ID":"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29","Type":"ContainerStarted","Data":"d7ed58427c92324f7db45c4a50c097a9c5066ff7d0a068937458ebd25929a8d5"}
Nov 24 12:09:26 crc kubenswrapper[4782]: I1124 12:09:26.758165 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerID="a9c0b3e97e361724941033845ac2e7064a2d2ebe316422c383cefe3360ef94a0" exitCode=0
Nov 24 12:09:26 crc kubenswrapper[4782]: I1124 12:09:26.758206 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh" event={"ID":"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29","Type":"ContainerDied","Data":"a9c0b3e97e361724941033845ac2e7064a2d2ebe316422c383cefe3360ef94a0"}
Nov 24 12:09:27 crc kubenswrapper[4782]: I1124 12:09:27.765464 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerID="01d44f5d4583a5eddc1224e7296ab07479c809c2bd536308c5a4252e03edc1dc" exitCode=0
Nov 24 12:09:27 crc kubenswrapper[4782]: I1124 12:09:27.765503 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh" event={"ID":"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29","Type":"ContainerDied","Data":"01d44f5d4583a5eddc1224e7296ab07479c809c2bd536308c5a4252e03edc1dc"}
Nov 24 12:09:28 crc kubenswrapper[4782]: I1124 12:09:28.774582 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerID="f2bf28d16d5063529808caea18e5412dfcbc5650f64f67463e2c4cb9d971cba1" exitCode=0
Nov 24 12:09:28 crc kubenswrapper[4782]: I1124 12:09:28.774869 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh" event={"ID":"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29","Type":"ContainerDied","Data":"f2bf28d16d5063529808caea18e5412dfcbc5650f64f67463e2c4cb9d971cba1"}
Nov 24 12:09:29 crc kubenswrapper[4782]: I1124 12:09:29.992948 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.180448 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkcb5\" (UniqueName: \"kubernetes.io/projected/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-kube-api-access-pkcb5\") pod \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") "
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.180559 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-util\") pod \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") "
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.180659 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-bundle\") pod \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\" (UID: \"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29\") "
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.181632 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-bundle" (OuterVolumeSpecName: "bundle") pod "1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" (UID: "1d52cfc6-407b-4fc0-9ea9-126bd1aedc29"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.185620 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-kube-api-access-pkcb5" (OuterVolumeSpecName: "kube-api-access-pkcb5") pod "1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" (UID: "1d52cfc6-407b-4fc0-9ea9-126bd1aedc29"). InnerVolumeSpecName "kube-api-access-pkcb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.195018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-util" (OuterVolumeSpecName: "util") pod "1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" (UID: "1d52cfc6-407b-4fc0-9ea9-126bd1aedc29"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.281940 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkcb5\" (UniqueName: \"kubernetes.io/projected/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-kube-api-access-pkcb5\") on node \"crc\" DevicePath \"\""
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.281980 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-util\") on node \"crc\" DevicePath \"\""
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.281990 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d52cfc6-407b-4fc0-9ea9-126bd1aedc29-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.788940 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh" event={"ID":"1d52cfc6-407b-4fc0-9ea9-126bd1aedc29","Type":"ContainerDied","Data":"d7ed58427c92324f7db45c4a50c097a9c5066ff7d0a068937458ebd25929a8d5"}
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.788986 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7ed58427c92324f7db45c4a50c097a9c5066ff7d0a068937458ebd25929a8d5"
Nov 24 12:09:30 crc kubenswrapper[4782]: I1124 12:09:30.789088 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.711816 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"]
Nov 24 12:09:33 crc kubenswrapper[4782]: E1124 12:09:33.712321 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerName="extract"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.712335 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerName="extract"
Nov 24 12:09:33 crc kubenswrapper[4782]: E1124 12:09:33.712345 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerName="util"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.712351 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerName="util"
Nov 24 12:09:33 crc kubenswrapper[4782]: E1124 12:09:33.712361 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerName="pull"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.712367 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerName="pull"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.712491 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d52cfc6-407b-4fc0-9ea9-126bd1aedc29" containerName="extract"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.712881 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.716048 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-l4gzq"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.734219 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"]
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.824150 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl248\" (UniqueName: \"kubernetes.io/projected/8aef8676-912f-4585-a5bb-a494867bf2e9-kube-api-access-jl248\") pod \"openstack-operator-controller-operator-5bd4c479c8-db2zp\" (UID: \"8aef8676-912f-4585-a5bb-a494867bf2e9\") " pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.925605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl248\" (UniqueName: \"kubernetes.io/projected/8aef8676-912f-4585-a5bb-a494867bf2e9-kube-api-access-jl248\") pod \"openstack-operator-controller-operator-5bd4c479c8-db2zp\" (UID: \"8aef8676-912f-4585-a5bb-a494867bf2e9\") " pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"
Nov 24 12:09:33 crc kubenswrapper[4782]: I1124 12:09:33.948327 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl248\" (UniqueName: \"kubernetes.io/projected/8aef8676-912f-4585-a5bb-a494867bf2e9-kube-api-access-jl248\") pod \"openstack-operator-controller-operator-5bd4c479c8-db2zp\" (UID: \"8aef8676-912f-4585-a5bb-a494867bf2e9\") " pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"
Nov 24 12:09:34 crc kubenswrapper[4782]: I1124 12:09:34.032082 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"
Nov 24 12:09:34 crc kubenswrapper[4782]: I1124 12:09:34.469088 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"]
Nov 24 12:09:34 crc kubenswrapper[4782]: I1124 12:09:34.812289 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp" event={"ID":"8aef8676-912f-4585-a5bb-a494867bf2e9","Type":"ContainerStarted","Data":"130119dedff579bdce438e0a3b06797c37c87a328b447a32f09dcaaad6dcef10"}
Nov 24 12:09:40 crc kubenswrapper[4782]: I1124 12:09:40.847599 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp" event={"ID":"8aef8676-912f-4585-a5bb-a494867bf2e9","Type":"ContainerStarted","Data":"db2186b93cb99362d049a2abfd2288d733e6fb5a4eade6d98f22dfcc56b9a5f9"}
Nov 24 12:09:40 crc kubenswrapper[4782]: I1124 12:09:40.848244 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"
Nov 24 12:09:54 crc kubenswrapper[4782]: I1124 12:09:54.035745 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp"
Nov 24 12:09:54 crc kubenswrapper[4782]: I1124 12:09:54.078630 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5bd4c479c8-db2zp" podStartSLOduration=15.689298932 podStartE2EDuration="21.078607048s" podCreationTimestamp="2025-11-24 12:09:33 +0000 UTC" firstStartedPulling="2025-11-24 12:09:34.48032513 +0000 UTC m=+823.724158899" lastFinishedPulling="2025-11-24 12:09:39.869633246 +0000 UTC m=+829.113467015" observedRunningTime="2025-11-24 12:09:40.876231443 +0000 UTC m=+830.120065242" watchObservedRunningTime="2025-11-24 12:09:54.078607048 +0000 UTC m=+843.322440827"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.365165 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m676z"]
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.366648 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.401921 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m676z"]
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.471445 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dvkm\" (UniqueName: \"kubernetes.io/projected/eb803a5a-0e5d-4526-a202-eac65ea2c609-kube-api-access-9dvkm\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.471512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-utilities\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.471624 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-catalog-content\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.573059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-catalog-content\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.573121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dvkm\" (UniqueName: \"kubernetes.io/projected/eb803a5a-0e5d-4526-a202-eac65ea2c609-kube-api-access-9dvkm\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.573155 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-utilities\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.573690 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-utilities\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.574590 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-catalog-content\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.596613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dvkm\" (UniqueName: \"kubernetes.io/projected/eb803a5a-0e5d-4526-a202-eac65ea2c609-kube-api-access-9dvkm\") pod \"certified-operators-m676z\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:09:59 crc kubenswrapper[4782]: I1124 12:09:59.691599 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:10:00 crc kubenswrapper[4782]: I1124 12:10:00.152978 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m676z"]
Nov 24 12:10:00 crc kubenswrapper[4782]: I1124 12:10:00.983517 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerID="65f0795879ff28e75c58f11d44eab31a4057c5b68722613486bf864e62ae87f9" exitCode=0
Nov 24 12:10:00 crc kubenswrapper[4782]: I1124 12:10:00.983563 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m676z" event={"ID":"eb803a5a-0e5d-4526-a202-eac65ea2c609","Type":"ContainerDied","Data":"65f0795879ff28e75c58f11d44eab31a4057c5b68722613486bf864e62ae87f9"}
Nov 24 12:10:00 crc kubenswrapper[4782]: I1124 12:10:00.983587 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m676z" event={"ID":"eb803a5a-0e5d-4526-a202-eac65ea2c609","Type":"ContainerStarted","Data":"e16bd47b7cab0bcb3fb363a44d40fe155346cbdb57116df8fda91d35a67499ea"}
Nov 24 12:10:03 crc kubenswrapper[4782]: I1124 12:10:03.000296 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerID="b04cacfc89e7ff32f50fa88767d86243bbd7e4f1821d1568832c7bfa9ab0021b" exitCode=0
Nov 24 12:10:03 crc kubenswrapper[4782]: I1124 12:10:03.000395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m676z" event={"ID":"eb803a5a-0e5d-4526-a202-eac65ea2c609","Type":"ContainerDied","Data":"b04cacfc89e7ff32f50fa88767d86243bbd7e4f1821d1568832c7bfa9ab0021b"}
Nov 24 12:10:04 crc kubenswrapper[4782]: I1124 12:10:04.008797 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m676z" event={"ID":"eb803a5a-0e5d-4526-a202-eac65ea2c609","Type":"ContainerStarted","Data":"4aa3594f798820a804be94e50254eb255a9d5a4fd6ee0ae65930abf50157a14e"}
Nov 24 12:10:04 crc kubenswrapper[4782]: I1124 12:10:04.034925 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m676z" podStartSLOduration=2.475571252 podStartE2EDuration="5.034911758s" podCreationTimestamp="2025-11-24 12:09:59 +0000 UTC" firstStartedPulling="2025-11-24 12:10:00.984902497 +0000 UTC m=+850.228736266" lastFinishedPulling="2025-11-24 12:10:03.544243003 +0000 UTC m=+852.788076772" observedRunningTime="2025-11-24 12:10:04.033176634 +0000 UTC m=+853.277010403" watchObservedRunningTime="2025-11-24 12:10:04.034911758 +0000 UTC m=+853.278745527"
Nov 24 12:10:09 crc kubenswrapper[4782]: I1124 12:10:09.692364 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:10:09 crc kubenswrapper[4782]: I1124 12:10:09.692986 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m676z"
Nov 24 12:10:09 crc kubenswrapper[4782]: I1124 12:10:09.739651 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m676z"
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m676z" Nov 24 12:10:10 crc kubenswrapper[4782]: I1124 12:10:10.108112 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m676z" Nov 24 12:10:10 crc kubenswrapper[4782]: I1124 12:10:10.169496 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m676z"] Nov 24 12:10:12 crc kubenswrapper[4782]: I1124 12:10:12.068613 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m676z" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="registry-server" containerID="cri-o://4aa3594f798820a804be94e50254eb255a9d5a4fd6ee0ae65930abf50157a14e" gracePeriod=2 Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.081207 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerID="4aa3594f798820a804be94e50254eb255a9d5a4fd6ee0ae65930abf50157a14e" exitCode=0 Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.081253 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m676z" event={"ID":"eb803a5a-0e5d-4526-a202-eac65ea2c609","Type":"ContainerDied","Data":"4aa3594f798820a804be94e50254eb255a9d5a4fd6ee0ae65930abf50157a14e"} Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.605942 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m676z" Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.753533 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-catalog-content\") pod \"eb803a5a-0e5d-4526-a202-eac65ea2c609\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.753915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-utilities\") pod \"eb803a5a-0e5d-4526-a202-eac65ea2c609\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.754928 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-utilities" (OuterVolumeSpecName: "utilities") pod "eb803a5a-0e5d-4526-a202-eac65ea2c609" (UID: "eb803a5a-0e5d-4526-a202-eac65ea2c609"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.755667 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dvkm\" (UniqueName: \"kubernetes.io/projected/eb803a5a-0e5d-4526-a202-eac65ea2c609-kube-api-access-9dvkm\") pod \"eb803a5a-0e5d-4526-a202-eac65ea2c609\" (UID: \"eb803a5a-0e5d-4526-a202-eac65ea2c609\") " Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.756742 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.762036 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb803a5a-0e5d-4526-a202-eac65ea2c609-kube-api-access-9dvkm" (OuterVolumeSpecName: "kube-api-access-9dvkm") pod "eb803a5a-0e5d-4526-a202-eac65ea2c609" (UID: "eb803a5a-0e5d-4526-a202-eac65ea2c609"). InnerVolumeSpecName "kube-api-access-9dvkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:10:13 crc kubenswrapper[4782]: I1124 12:10:13.857579 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dvkm\" (UniqueName: \"kubernetes.io/projected/eb803a5a-0e5d-4526-a202-eac65ea2c609-kube-api-access-9dvkm\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.089581 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m676z" event={"ID":"eb803a5a-0e5d-4526-a202-eac65ea2c609","Type":"ContainerDied","Data":"e16bd47b7cab0bcb3fb363a44d40fe155346cbdb57116df8fda91d35a67499ea"} Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.089627 4782 scope.go:117] "RemoveContainer" containerID="4aa3594f798820a804be94e50254eb255a9d5a4fd6ee0ae65930abf50157a14e" Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.089736 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m676z" Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.109592 4782 scope.go:117] "RemoveContainer" containerID="b04cacfc89e7ff32f50fa88767d86243bbd7e4f1821d1568832c7bfa9ab0021b" Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.133533 4782 scope.go:117] "RemoveContainer" containerID="65f0795879ff28e75c58f11d44eab31a4057c5b68722613486bf864e62ae87f9" Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.195486 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb803a5a-0e5d-4526-a202-eac65ea2c609" (UID: "eb803a5a-0e5d-4526-a202-eac65ea2c609"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.263787 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb803a5a-0e5d-4526-a202-eac65ea2c609-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.413786 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m676z"] Nov 24 12:10:14 crc kubenswrapper[4782]: I1124 12:10:14.418932 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m676z"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.312016 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc"] Nov 24 12:10:15 crc kubenswrapper[4782]: E1124 12:10:15.312593 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="extract-content" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.312606 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="extract-content" Nov 24 12:10:15 crc kubenswrapper[4782]: E1124 12:10:15.312629 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="registry-server" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.312637 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="registry-server" Nov 24 12:10:15 crc kubenswrapper[4782]: E1124 12:10:15.312655 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="extract-utilities" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.312663 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="extract-utilities" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.312783 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" containerName="registry-server" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.313549 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.317592 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4cb56" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.335381 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.347142 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.348089 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.354116 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dk52b"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.371290 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.372172 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.378088 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-bkzc5"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.388793 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddcr\" (UniqueName: \"kubernetes.io/projected/9adec34d-0e3d-4f65-80b2-4ba1c0731be4-kube-api-access-tddcr\") pod \"barbican-operator-controller-manager-86dc4d89c8-kr5jc\" (UID: \"9adec34d-0e3d-4f65-80b2-4ba1c0731be4\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.393795 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.426003 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.427206 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.434549 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-s82h7"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.444629 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.445844 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.448654 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-566dt"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.474016 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.475203 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.476432 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.490877 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.491628 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7drz\" (UniqueName: \"kubernetes.io/projected/2ae11a51-1628-454f-8b78-77e9aaa2691b-kube-api-access-l7drz\") pod \"heat-operator-controller-manager-774b86978c-wmsss\" (UID: \"2ae11a51-1628-454f-8b78-77e9aaa2691b\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.491785 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cb24\" (UniqueName: \"kubernetes.io/projected/5ace8cad-a0d4-4ba1-99f8-a097edd76a74-kube-api-access-5cb24\") pod \"designate-operator-controller-manager-7d695c9b56-czqnv\" (UID: \"5ace8cad-a0d4-4ba1-99f8-a097edd76a74\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.491953 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tddcr\" (UniqueName: \"kubernetes.io/projected/9adec34d-0e3d-4f65-80b2-4ba1c0731be4-kube-api-access-tddcr\") pod \"barbican-operator-controller-manager-86dc4d89c8-kr5jc\" (UID: \"9adec34d-0e3d-4f65-80b2-4ba1c0731be4\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.492114 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k65wk\" (UniqueName: \"kubernetes.io/projected/45393058-140b-48ea-9691-9bbe0740342b-kube-api-access-k65wk\") pod \"cinder-operator-controller-manager-79856dc55c-k5z2n\" (UID: \"45393058-140b-48ea-9691-9bbe0740342b\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.502349 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb803a5a-0e5d-4526-a202-eac65ea2c609" path="/var/lib/kubelet/pods/eb803a5a-0e5d-4526-a202-eac65ea2c609/volumes"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.503224 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.503257 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.503654 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9kvpj"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.546502 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.547823 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.551520 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-f8nd4"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.552574 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddcr\" (UniqueName: \"kubernetes.io/projected/9adec34d-0e3d-4f65-80b2-4ba1c0731be4-kube-api-access-tddcr\") pod \"barbican-operator-controller-manager-86dc4d89c8-kr5jc\" (UID: \"9adec34d-0e3d-4f65-80b2-4ba1c0731be4\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.573869 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.582443 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.586911 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.593295 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlcbx\" (UniqueName: \"kubernetes.io/projected/ba20509b-c083-42f1-bf39-be2ed4a463f7-kube-api-access-vlcbx\") pod \"glance-operator-controller-manager-68b95954c9-kmhd6\" (UID: \"ba20509b-c083-42f1-bf39-be2ed4a463f7\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.593417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htsjt\" (UniqueName: \"kubernetes.io/projected/e6982d2e-f7d3-4374-bc66-7949d3bcc062-kube-api-access-htsjt\") pod \"horizon-operator-controller-manager-68c9694994-nqd5j\" (UID: \"e6982d2e-f7d3-4374-bc66-7949d3bcc062\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.593488 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k65wk\" (UniqueName: \"kubernetes.io/projected/45393058-140b-48ea-9691-9bbe0740342b-kube-api-access-k65wk\") pod \"cinder-operator-controller-manager-79856dc55c-k5z2n\" (UID: \"45393058-140b-48ea-9691-9bbe0740342b\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.593536 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7drz\" (UniqueName: \"kubernetes.io/projected/2ae11a51-1628-454f-8b78-77e9aaa2691b-kube-api-access-l7drz\") pod \"heat-operator-controller-manager-774b86978c-wmsss\" (UID: \"2ae11a51-1628-454f-8b78-77e9aaa2691b\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.593562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cb24\" (UniqueName: \"kubernetes.io/projected/5ace8cad-a0d4-4ba1-99f8-a097edd76a74-kube-api-access-5cb24\") pod \"designate-operator-controller-manager-7d695c9b56-czqnv\" (UID: \"5ace8cad-a0d4-4ba1-99f8-a097edd76a74\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.596576 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hssft"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.605708 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.606673 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.610477 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.610669 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vlsd4"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.631920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7drz\" (UniqueName: \"kubernetes.io/projected/2ae11a51-1628-454f-8b78-77e9aaa2691b-kube-api-access-l7drz\") pod \"heat-operator-controller-manager-774b86978c-wmsss\" (UID: \"2ae11a51-1628-454f-8b78-77e9aaa2691b\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.632168 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.643716 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cb24\" (UniqueName: \"kubernetes.io/projected/5ace8cad-a0d4-4ba1-99f8-a097edd76a74-kube-api-access-5cb24\") pod \"designate-operator-controller-manager-7d695c9b56-czqnv\" (UID: \"5ace8cad-a0d4-4ba1-99f8-a097edd76a74\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.661229 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.663050 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k65wk\" (UniqueName: \"kubernetes.io/projected/45393058-140b-48ea-9691-9bbe0740342b-kube-api-access-k65wk\") pod \"cinder-operator-controller-manager-79856dc55c-k5z2n\" (UID: \"45393058-140b-48ea-9691-9bbe0740342b\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.664814 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.668014 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.699953 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.700873 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hf2t\" (UniqueName: \"kubernetes.io/projected/61b6c96b-b73c-47b5-8e05-988870f4587f-kube-api-access-7hf2t\") pod \"infra-operator-controller-manager-d5cc86f4b-fjggr\" (UID: \"61b6c96b-b73c-47b5-8e05-988870f4587f\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.700949 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8x88\" (UniqueName: \"kubernetes.io/projected/6b6efe11-117c-42a3-baa5-b43b07557e43-kube-api-access-k8x88\") pod \"ironic-operator-controller-manager-5bfcdc958c-whlt5\" (UID: \"6b6efe11-117c-42a3-baa5-b43b07557e43\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.700973 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7p2m\" (UniqueName: \"kubernetes.io/projected/7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c-kube-api-access-m7p2m\") pod \"keystone-operator-controller-manager-7b854ddf99-pb2wn\" (UID: \"7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c\") " pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.700997 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b6c96b-b73c-47b5-8e05-988870f4587f-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-fjggr\" (UID: \"61b6c96b-b73c-47b5-8e05-988870f4587f\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.701024 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlcbx\" (UniqueName: \"kubernetes.io/projected/ba20509b-c083-42f1-bf39-be2ed4a463f7-kube-api-access-vlcbx\") pod \"glance-operator-controller-manager-68b95954c9-kmhd6\" (UID: \"ba20509b-c083-42f1-bf39-be2ed4a463f7\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.701057 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htsjt\" (UniqueName: \"kubernetes.io/projected/e6982d2e-f7d3-4374-bc66-7949d3bcc062-kube-api-access-htsjt\") pod \"horizon-operator-controller-manager-68c9694994-nqd5j\" (UID: \"e6982d2e-f7d3-4374-bc66-7949d3bcc062\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.724121 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.725089 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.733443 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nskh2"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.733974 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htsjt\" (UniqueName: \"kubernetes.io/projected/e6982d2e-f7d3-4374-bc66-7949d3bcc062-kube-api-access-htsjt\") pod \"horizon-operator-controller-manager-68c9694994-nqd5j\" (UID: \"e6982d2e-f7d3-4374-bc66-7949d3bcc062\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.741010 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.743221 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlcbx\" (UniqueName: \"kubernetes.io/projected/ba20509b-c083-42f1-bf39-be2ed4a463f7-kube-api-access-vlcbx\") pod \"glance-operator-controller-manager-68b95954c9-kmhd6\" (UID: \"ba20509b-c083-42f1-bf39-be2ed4a463f7\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.759560 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.771529 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.774416 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.780093 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-c2nvt"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.789900 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.804505 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8x88\" (UniqueName: \"kubernetes.io/projected/6b6efe11-117c-42a3-baa5-b43b07557e43-kube-api-access-k8x88\") pod \"ironic-operator-controller-manager-5bfcdc958c-whlt5\" (UID: \"6b6efe11-117c-42a3-baa5-b43b07557e43\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.804563 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7p2m\" (UniqueName: \"kubernetes.io/projected/7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c-kube-api-access-m7p2m\") pod \"keystone-operator-controller-manager-7b854ddf99-pb2wn\" (UID: \"7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c\") " pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.804613 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b6c96b-b73c-47b5-8e05-988870f4587f-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-fjggr\" (UID: \"61b6c96b-b73c-47b5-8e05-988870f4587f\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.804693 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47fw2\" (UniqueName: \"kubernetes.io/projected/ef74c0aa-ac31-49b1-861d-258fe0a3ddff-kube-api-access-47fw2\") pod \"manila-operator-controller-manager-58bb8d67cc-ctr6x\" (UID: \"ef74c0aa-ac31-49b1-861d-258fe0a3ddff\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.804741 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hf2t\" (UniqueName: \"kubernetes.io/projected/61b6c96b-b73c-47b5-8e05-988870f4587f-kube-api-access-7hf2t\") pod \"infra-operator-controller-manager-d5cc86f4b-fjggr\" (UID: \"61b6c96b-b73c-47b5-8e05-988870f4587f\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.805191 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x"]
Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.805228 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2"]
Nov 24 12:10:15 crc kubenswrapper[4782]: E1124 12:10:15.805343 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 24 12:10:15 crc kubenswrapper[4782]: E1124 12:10:15.805400 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61b6c96b-b73c-47b5-8e05-988870f4587f-cert podName:61b6c96b-b73c-47b5-8e05-988870f4587f nodeName:}" failed. No retries permitted until 2025-11-24 12:10:16.305385287 +0000 UTC m=+865.549219056 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61b6c96b-b73c-47b5-8e05-988870f4587f-cert") pod "infra-operator-controller-manager-d5cc86f4b-fjggr" (UID: "61b6c96b-b73c-47b5-8e05-988870f4587f") : secret "infra-operator-webhook-server-cert" not found Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.809486 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.809568 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.813757 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.820120 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lws74" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.847872 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8x88\" (UniqueName: \"kubernetes.io/projected/6b6efe11-117c-42a3-baa5-b43b07557e43-kube-api-access-k8x88\") pod \"ironic-operator-controller-manager-5bfcdc958c-whlt5\" (UID: \"6b6efe11-117c-42a3-baa5-b43b07557e43\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.850482 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7p2m\" (UniqueName: \"kubernetes.io/projected/7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c-kube-api-access-m7p2m\") pod \"keystone-operator-controller-manager-7b854ddf99-pb2wn\" (UID: \"7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c\") " pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.869606 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hf2t\" (UniqueName: \"kubernetes.io/projected/61b6c96b-b73c-47b5-8e05-988870f4587f-kube-api-access-7hf2t\") pod \"infra-operator-controller-manager-d5cc86f4b-fjggr\" (UID: \"61b6c96b-b73c-47b5-8e05-988870f4587f\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.871968 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.888685 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.907509 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47fw2\" (UniqueName: \"kubernetes.io/projected/ef74c0aa-ac31-49b1-861d-258fe0a3ddff-kube-api-access-47fw2\") pod \"manila-operator-controller-manager-58bb8d67cc-ctr6x\" (UID: \"ef74c0aa-ac31-49b1-861d-258fe0a3ddff\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.907792 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzlr\" (UniqueName: \"kubernetes.io/projected/34e6f50e-248f-4ef3-a145-83ccb7616d0d-kube-api-access-8bzlr\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-57dk2\" (UID: \"34e6f50e-248f-4ef3-a145-83ccb7616d0d\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.907983 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjmd\" (UniqueName: \"kubernetes.io/projected/9e50a599-1a70-46c9-94a1-d3148778888d-kube-api-access-chjmd\") pod \"neutron-operator-controller-manager-7c57c8bbc4-tmp5g\" (UID: \"9e50a599-1a70-46c9-94a1-d3148778888d\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.915450 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.920958 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.921295 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.929651 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.929712 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.929791 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.929811 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xkq7j" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.932617 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-t8f6j" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.949281 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.950568 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.953204 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.954845 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.957726 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-w67mr" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.972275 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.972766 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.974295 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.978109 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47fw2\" (UniqueName: \"kubernetes.io/projected/ef74c0aa-ac31-49b1-861d-258fe0a3ddff-kube-api-access-47fw2\") pod \"manila-operator-controller-manager-58bb8d67cc-ctr6x\" (UID: \"ef74c0aa-ac31-49b1-861d-258fe0a3ddff\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.978786 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hqkjc" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.978950 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.984622 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns"] Nov 24 12:10:15 crc kubenswrapper[4782]: I1124 12:10:15.986180 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6dgn8" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.010043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bzlr\" (UniqueName: \"kubernetes.io/projected/34e6f50e-248f-4ef3-a145-83ccb7616d0d-kube-api-access-8bzlr\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-57dk2\" (UID: \"34e6f50e-248f-4ef3-a145-83ccb7616d0d\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.010094 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqzlr\" (UniqueName: \"kubernetes.io/projected/ec2d6fc2-5418-4263-a351-0422b2d5068d-kube-api-access-jqzlr\") pod \"nova-operator-controller-manager-79556f57fc-2hr2j\" (UID: \"ec2d6fc2-5418-4263-a351-0422b2d5068d\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.010121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjmd\" (UniqueName: \"kubernetes.io/projected/9e50a599-1a70-46c9-94a1-d3148778888d-kube-api-access-chjmd\") pod \"neutron-operator-controller-manager-7c57c8bbc4-tmp5g\" (UID: \"9e50a599-1a70-46c9-94a1-d3148778888d\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.010147 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.010179 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmr8q\" (UniqueName: \"kubernetes.io/projected/ecf25e22-396d-4c6d-9585-566ffc0d0092-kube-api-access-fmr8q\") pod 
\"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.010231 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brk46\" (UniqueName: \"kubernetes.io/projected/a9dcd8ef-dbbf-43dc-97a0-e77d942ff589-kube-api-access-brk46\") pod \"ovn-operator-controller-manager-66cf5c67ff-7pb8k\" (UID: \"a9dcd8ef-dbbf-43dc-97a0-e77d942ff589\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.010251 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8zgn\" (UniqueName: \"kubernetes.io/projected/a09a3b55-484e-461d-9f95-1e3279b323c5-kube-api-access-b8zgn\") pod \"octavia-operator-controller-manager-fd75fd47d-kh9k5\" (UID: \"a09a3b55-484e-461d-9f95-1e3279b323c5\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.047442 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.048514 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.059101 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tf9tr" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.068093 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.073071 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bzlr\" (UniqueName: \"kubernetes.io/projected/34e6f50e-248f-4ef3-a145-83ccb7616d0d-kube-api-access-8bzlr\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-57dk2\" (UID: \"34e6f50e-248f-4ef3-a145-83ccb7616d0d\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.086600 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjmd\" (UniqueName: \"kubernetes.io/projected/9e50a599-1a70-46c9-94a1-d3148778888d-kube-api-access-chjmd\") pod \"neutron-operator-controller-manager-7c57c8bbc4-tmp5g\" (UID: \"9e50a599-1a70-46c9-94a1-d3148778888d\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.103141 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.110894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brk46\" (UniqueName: \"kubernetes.io/projected/a9dcd8ef-dbbf-43dc-97a0-e77d942ff589-kube-api-access-brk46\") pod \"ovn-operator-controller-manager-66cf5c67ff-7pb8k\" (UID: \"a9dcd8ef-dbbf-43dc-97a0-e77d942ff589\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.110938 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8zgn\" (UniqueName: \"kubernetes.io/projected/a09a3b55-484e-461d-9f95-1e3279b323c5-kube-api-access-b8zgn\") pod \"octavia-operator-controller-manager-fd75fd47d-kh9k5\" (UID: \"a09a3b55-484e-461d-9f95-1e3279b323c5\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.110971 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lrk\" (UniqueName: \"kubernetes.io/projected/60a3fcad-0c5a-4be2-b89b-4d143d3a8e62-kube-api-access-z7lrk\") pod \"placement-operator-controller-manager-5db546f9d9-k6n5f\" (UID: \"60a3fcad-0c5a-4be2-b89b-4d143d3a8e62\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.111014 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqzlr\" (UniqueName: \"kubernetes.io/projected/ec2d6fc2-5418-4263-a351-0422b2d5068d-kube-api-access-jqzlr\") pod \"nova-operator-controller-manager-79556f57fc-2hr2j\" (UID: \"ec2d6fc2-5418-4263-a351-0422b2d5068d\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.111042 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrj78\" (UniqueName: \"kubernetes.io/projected/5f8b3ed3-fba7-4a0e-8245-f822c548082e-kube-api-access-jrj78\") pod \"swift-operator-controller-manager-6fdc4fcf86-pdvh7\" (UID: \"5f8b3ed3-fba7-4a0e-8245-f822c548082e\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" Nov 24 12:10:16 crc 
kubenswrapper[4782]: I1124 12:10:16.111062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.111096 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmr8q\" (UniqueName: \"kubernetes.io/projected/ecf25e22-396d-4c6d-9585-566ffc0d0092-kube-api-access-fmr8q\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.111816 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.111856 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert podName:ecf25e22-396d-4c6d-9585-566ffc0d0092 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:16.611843249 +0000 UTC m=+865.855677018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" (UID: "ecf25e22-396d-4c6d-9585-566ffc0d0092") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.134066 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.138738 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.139835 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.146883 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8zgn\" (UniqueName: \"kubernetes.io/projected/a09a3b55-484e-461d-9f95-1e3279b323c5-kube-api-access-b8zgn\") pod \"octavia-operator-controller-manager-fd75fd47d-kh9k5\" (UID: \"a09a3b55-484e-461d-9f95-1e3279b323c5\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.146956 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.155845 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wmtk6" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.162835 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.166048 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmr8q\" (UniqueName: \"kubernetes.io/projected/ecf25e22-396d-4c6d-9585-566ffc0d0092-kube-api-access-fmr8q\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.177728 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brk46\" (UniqueName: \"kubernetes.io/projected/a9dcd8ef-dbbf-43dc-97a0-e77d942ff589-kube-api-access-brk46\") pod \"ovn-operator-controller-manager-66cf5c67ff-7pb8k\" (UID: \"a9dcd8ef-dbbf-43dc-97a0-e77d942ff589\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.177940 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.193775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqzlr\" (UniqueName: \"kubernetes.io/projected/ec2d6fc2-5418-4263-a351-0422b2d5068d-kube-api-access-jqzlr\") pod \"nova-operator-controller-manager-79556f57fc-2hr2j\" (UID: \"ec2d6fc2-5418-4263-a351-0422b2d5068d\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.216054 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lrk\" (UniqueName: \"kubernetes.io/projected/60a3fcad-0c5a-4be2-b89b-4d143d3a8e62-kube-api-access-z7lrk\") pod \"placement-operator-controller-manager-5db546f9d9-k6n5f\" (UID: \"60a3fcad-0c5a-4be2-b89b-4d143d3a8e62\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.216126 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrj78\" (UniqueName: \"kubernetes.io/projected/5f8b3ed3-fba7-4a0e-8245-f822c548082e-kube-api-access-jrj78\") pod \"swift-operator-controller-manager-6fdc4fcf86-pdvh7\" (UID: \"5f8b3ed3-fba7-4a0e-8245-f822c548082e\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.216202 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wnzn\" (UniqueName: \"kubernetes.io/projected/76bc751e-4645-4cb1-bdfe-7e3c6732505b-kube-api-access-4wnzn\") pod \"telemetry-operator-controller-manager-567f98c9d-n6cqm\" (UID: \"76bc751e-4645-4cb1-bdfe-7e3c6732505b\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.229771 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.230826 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.236291 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-clxm4"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.237505 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.247342 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-447gh" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.247558 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ndh28" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.265629 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.280202 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.280310 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrj78\" (UniqueName: \"kubernetes.io/projected/5f8b3ed3-fba7-4a0e-8245-f822c548082e-kube-api-access-jrj78\") pod \"swift-operator-controller-manager-6fdc4fcf86-pdvh7\" (UID: \"5f8b3ed3-fba7-4a0e-8245-f822c548082e\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.285997 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lrk\" (UniqueName: \"kubernetes.io/projected/60a3fcad-0c5a-4be2-b89b-4d143d3a8e62-kube-api-access-z7lrk\") pod \"placement-operator-controller-manager-5db546f9d9-k6n5f\" (UID: \"60a3fcad-0c5a-4be2-b89b-4d143d3a8e62\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.300774 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-clxm4"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.318410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b6c96b-b73c-47b5-8e05-988870f4587f-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-fjggr\" (UID: \"61b6c96b-b73c-47b5-8e05-988870f4587f\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.318447 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wnzn\" (UniqueName: \"kubernetes.io/projected/76bc751e-4645-4cb1-bdfe-7e3c6732505b-kube-api-access-4wnzn\") pod \"telemetry-operator-controller-manager-567f98c9d-n6cqm\" (UID: \"76bc751e-4645-4cb1-bdfe-7e3c6732505b\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.318503 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbpq7\" (UniqueName: 
\"kubernetes.io/projected/19e8c85c-675d-433f-8346-878034f14d24-kube-api-access-tbpq7\") pod \"watcher-operator-controller-manager-864885998-clxm4\" (UID: \"19e8c85c-675d-433f-8346-878034f14d24\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.318545 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-774f6\" (UniqueName: \"kubernetes.io/projected/a0f8d31c-392e-468d-9a86-b5a482dbc6fb-kube-api-access-774f6\") pod \"test-operator-controller-manager-5cb74df96-8xf2r\" (UID: \"a0f8d31c-392e-468d-9a86-b5a482dbc6fb\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.325863 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.327918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b6c96b-b73c-47b5-8e05-988870f4587f-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-fjggr\" (UID: \"61b6c96b-b73c-47b5-8e05-988870f4587f\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.388003 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wnzn\" (UniqueName: \"kubernetes.io/projected/76bc751e-4645-4cb1-bdfe-7e3c6732505b-kube-api-access-4wnzn\") pod \"telemetry-operator-controller-manager-567f98c9d-n6cqm\" (UID: \"76bc751e-4645-4cb1-bdfe-7e3c6732505b\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.388285 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.419643 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.425251 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbpq7\" (UniqueName: \"kubernetes.io/projected/19e8c85c-675d-433f-8346-878034f14d24-kube-api-access-tbpq7\") pod \"watcher-operator-controller-manager-864885998-clxm4\" (UID: \"19e8c85c-675d-433f-8346-878034f14d24\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.425473 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774f6\" (UniqueName: \"kubernetes.io/projected/a0f8d31c-392e-468d-9a86-b5a482dbc6fb-kube-api-access-774f6\") pod \"test-operator-controller-manager-5cb74df96-8xf2r\" (UID: \"a0f8d31c-392e-468d-9a86-b5a482dbc6fb\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.443945 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.444674 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.455808 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.455971 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.456084 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4zqkg" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.468605 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.472965 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbpq7\" (UniqueName: \"kubernetes.io/projected/19e8c85c-675d-433f-8346-878034f14d24-kube-api-access-tbpq7\") pod \"watcher-operator-controller-manager-864885998-clxm4\" (UID: \"19e8c85c-675d-433f-8346-878034f14d24\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.495128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774f6\" (UniqueName: \"kubernetes.io/projected/a0f8d31c-392e-468d-9a86-b5a482dbc6fb-kube-api-access-774f6\") pod \"test-operator-controller-manager-5cb74df96-8xf2r\" (UID: \"a0f8d31c-392e-468d-9a86-b5a482dbc6fb\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.520676 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.526168 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2"] Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.526456 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.530333 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whlh5\" (UniqueName: \"kubernetes.io/projected/55628383-51b4-4c77-ac10-476769165984-kube-api-access-whlh5\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.530444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.530473 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.591699 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.598675 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.638655 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.638728 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.638803 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whlh5\" (UniqueName: \"kubernetes.io/projected/55628383-51b4-4c77-ac10-476769165984-kube-api-access-whlh5\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.638856 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.638983 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.639083 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs podName:55628383-51b4-4c77-ac10-476769165984 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:17.139053509 +0000 UTC m=+866.382887268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs") pod "openstack-operator-controller-manager-689b7ddfcc-9brt2" (UID: "55628383-51b4-4c77-ac10-476769165984") : secret "webhook-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.639391 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.639421 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs podName:55628383-51b4-4c77-ac10-476769165984 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:17.139413809 +0000 UTC m=+866.383247578 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs") pod "openstack-operator-controller-manager-689b7ddfcc-9brt2" (UID: "55628383-51b4-4c77-ac10-476769165984") : secret "metrics-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.639472 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: E1124 12:10:16.639492 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert podName:ecf25e22-396d-4c6d-9585-566ffc0d0092 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:17.639486891 +0000 UTC m=+866.883320650 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" (UID: "ecf25e22-396d-4c6d-9585-566ffc0d0092") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.705906 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whlh5\" (UniqueName: \"kubernetes.io/projected/55628383-51b4-4c77-ac10-476769165984-kube-api-access-whlh5\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:16 crc kubenswrapper[4782]: I1124 12:10:16.976796 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc"] Nov 24 12:10:17 crc kubenswrapper[4782]: W1124 12:10:17.034566 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9adec34d_0e3d_4f65_80b2_4ba1c0731be4.slice/crio-22f8f8ee40bacd55ee48f77107c5400d748f7642c701abcc66b6015b49dc7cd6 WatchSource:0}: Error finding container 22f8f8ee40bacd55ee48f77107c5400d748f7642c701abcc66b6015b49dc7cd6: Status 404 returned error can't find the container with id 22f8f8ee40bacd55ee48f77107c5400d748f7642c701abcc66b6015b49dc7cd6 Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.052510 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.053403 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.071015 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dmx54" Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.100405 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.122924 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.156250 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6" event={"ID":"ba20509b-c083-42f1-bf39-be2ed4a463f7","Type":"ContainerStarted","Data":"ffdc3d3a8293c1d148813d6d47e46e8ac1f3586979867c6c4eaa9e9ccfd04921"} Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.157109 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc" event={"ID":"9adec34d-0e3d-4f65-80b2-4ba1c0731be4","Type":"ContainerStarted","Data":"22f8f8ee40bacd55ee48f77107c5400d748f7642c701abcc66b6015b49dc7cd6"} Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.171473 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.171508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.171574 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7nn\" (UniqueName: \"kubernetes.io/projected/f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a-kube-api-access-wq7nn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4ltpx\" (UID: \"f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" Nov 24 12:10:17 crc kubenswrapper[4782]: E1124 12:10:17.171704 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 12:10:17 crc kubenswrapper[4782]: E1124 12:10:17.171742 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs podName:55628383-51b4-4c77-ac10-476769165984 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:18.17172916 +0000 UTC m=+867.415562929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs") pod "openstack-operator-controller-manager-689b7ddfcc-9brt2" (UID: "55628383-51b4-4c77-ac10-476769165984") : secret "webhook-server-cert" not found Nov 24 12:10:17 crc kubenswrapper[4782]: E1124 12:10:17.171779 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 12:10:17 crc kubenswrapper[4782]: E1124 12:10:17.171797 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs podName:55628383-51b4-4c77-ac10-476769165984 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:18.171790102 +0000 UTC m=+867.415623871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs") pod "openstack-operator-controller-manager-689b7ddfcc-9brt2" (UID: "55628383-51b4-4c77-ac10-476769165984") : secret "metrics-server-cert" not found Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.280050 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7nn\" (UniqueName: \"kubernetes.io/projected/f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a-kube-api-access-wq7nn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4ltpx\" (UID: \"f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.334319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7nn\" (UniqueName: \"kubernetes.io/projected/f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a-kube-api-access-wq7nn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4ltpx\" (UID: \"f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.393807 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.522725 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5"] Nov 24 12:10:17 crc kubenswrapper[4782]: W1124 12:10:17.573578 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b6efe11_117c_42a3_baa5_b43b07557e43.slice/crio-3b28c319625dc01ac0c6b680e1e4f6f7a4277f6f2365d9d6cc782a3a359ca64b WatchSource:0}: Error finding container 3b28c319625dc01ac0c6b680e1e4f6f7a4277f6f2365d9d6cc782a3a359ca64b: Status 404 returned error can't find the container with id 3b28c319625dc01ac0c6b680e1e4f6f7a4277f6f2365d9d6cc782a3a359ca64b Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.580970 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.608302 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.646688 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.656496 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.660636 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.674054 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.704063 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:17 crc kubenswrapper[4782]: E1124 12:10:17.707198 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:10:17 crc kubenswrapper[4782]: E1124 12:10:17.707311 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert podName:ecf25e22-396d-4c6d-9585-566ffc0d0092 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:19.707277691 +0000 UTC m=+868.951111450 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" (UID: "ecf25e22-396d-4c6d-9585-566ffc0d0092") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.830869 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g"] Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.874361 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-wmsss"] Nov 24 12:10:17 crc kubenswrapper[4782]: W1124 12:10:17.893364 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae11a51_1628_454f_8b78_77e9aaa2691b.slice/crio-067d76d0f831c0aa397ca93501c0dd83742947d08f64913dc825abcbf2e76ac7 WatchSource:0}: Error finding container 067d76d0f831c0aa397ca93501c0dd83742947d08f64913dc825abcbf2e76ac7: Status 404 returned error can't find the container with id 067d76d0f831c0aa397ca93501c0dd83742947d08f64913dc825abcbf2e76ac7 Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.898169 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j"] Nov 24 12:10:17 crc kubenswrapper[4782]: W1124 12:10:17.908784 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef74c0aa_ac31_49b1_861d_258fe0a3ddff.slice/crio-0c97ef95bc76dbd5f2c13fc7e6eb1d03a2b693c73410fb60d238dc2814b7c014 WatchSource:0}: Error finding container 0c97ef95bc76dbd5f2c13fc7e6eb1d03a2b693c73410fb60d238dc2814b7c014: Status 404 returned error can't find the container with id 0c97ef95bc76dbd5f2c13fc7e6eb1d03a2b693c73410fb60d238dc2814b7c014 Nov 24 12:10:17 crc kubenswrapper[4782]: I1124 12:10:17.909876 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x"] Nov 24 12:10:17 crc kubenswrapper[4782]: W1124 12:10:17.915144 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec2d6fc2_5418_4263_a351_0422b2d5068d.slice/crio-9bb76f64de7f7cee25b829df308fbc2e9db6071f24b7aea7d6740648175af144 WatchSource:0}: Error finding container 9bb76f64de7f7cee25b829df308fbc2e9db6071f24b7aea7d6740648175af144: Status 404 returned error can't find the container with id 9bb76f64de7f7cee25b829df308fbc2e9db6071f24b7aea7d6740648175af144 Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.171071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" event={"ID":"6b6efe11-117c-42a3-baa5-b43b07557e43","Type":"ContainerStarted","Data":"3b28c319625dc01ac0c6b680e1e4f6f7a4277f6f2365d9d6cc782a3a359ca64b"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.172267 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" event={"ID":"9e50a599-1a70-46c9-94a1-d3148778888d","Type":"ContainerStarted","Data":"8d95d7f807e25c6ce7ba1a03f0140d16aafe8a6f6a6bec25710bff603464506c"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.176485 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" event={"ID":"34e6f50e-248f-4ef3-a145-83ccb7616d0d","Type":"ContainerStarted","Data":"46ce4f48666920da4a20eb971d8388b30ae5d4a4edeb29afb371732b9f323d4c"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.214901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.214945 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.215072 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.215140 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs podName:55628383-51b4-4c77-ac10-476769165984 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:20.215123233 +0000 UTC m=+869.458957002 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs") pod "openstack-operator-controller-manager-689b7ddfcc-9brt2" (UID: "55628383-51b4-4c77-ac10-476769165984") : secret "webhook-server-cert" not found Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.219590 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv" event={"ID":"5ace8cad-a0d4-4ba1-99f8-a097edd76a74","Type":"ContainerStarted","Data":"39820c8a95f323c4fc37c88915793e63b9f0b842a15daa3abc283f1e23702b76"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.225662 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-metrics-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.244799 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" event={"ID":"7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c","Type":"ContainerStarted","Data":"538db33753bb9fef959385d939a05b92941f693ef1042d9436a98afc13f16b59"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.252675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" event={"ID":"a09a3b55-484e-461d-9f95-1e3279b323c5","Type":"ContainerStarted","Data":"32499e9312c1f516c0730898d128cce0ac610949acd48484ba40c2eddedfa4ce"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.275422 4782 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-clxm4"] Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.278948 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7"] Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.297914 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j" event={"ID":"e6982d2e-f7d3-4374-bc66-7949d3bcc062","Type":"ContainerStarted","Data":"3c13f80a7aa427c88ee79685b453818877ac0ab0913e3a32bd130533c2bcd80c"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.298589 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k"] Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.312193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" event={"ID":"ef74c0aa-ac31-49b1-861d-258fe0a3ddff","Type":"ContainerStarted","Data":"0c97ef95bc76dbd5f2c13fc7e6eb1d03a2b693c73410fb60d238dc2814b7c014"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.315066 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm"] Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.323091 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx"] Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.323128 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss" event={"ID":"2ae11a51-1628-454f-8b78-77e9aaa2691b","Type":"ContainerStarted","Data":"067d76d0f831c0aa397ca93501c0dd83742947d08f64913dc825abcbf2e76ac7"} Nov 24 12:10:18 crc kubenswrapper[4782]: W1124 12:10:18.323730 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f8b3ed3_fba7_4a0e_8245_f822c548082e.slice/crio-27165d8a15c657ec723a6d30e194c85003d1f4cce745f17c87af255e11ae8b9d WatchSource:0}: Error finding container 27165d8a15c657ec723a6d30e194c85003d1f4cce745f17c87af255e11ae8b9d: Status 404 returned error can't find the container with id 27165d8a15c657ec723a6d30e194c85003d1f4cce745f17c87af255e11ae8b9d Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.326140 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n" event={"ID":"45393058-140b-48ea-9691-9bbe0740342b","Type":"ContainerStarted","Data":"f36b9f7db343e40d35b08aa11770d41dc4515dacb526871d6eb5ce7d6ef8ee51"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.327351 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" event={"ID":"ec2d6fc2-5418-4263-a351-0422b2d5068d","Type":"ContainerStarted","Data":"9bb76f64de7f7cee25b829df308fbc2e9db6071f24b7aea7d6740648175af144"} Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.332443 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr"] Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.335248 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f"] Nov 24 12:10:18 crc kubenswrapper[4782]: I1124 12:10:18.342868 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r"] Nov 24 12:10:18 crc kubenswrapper[4782]: W1124 12:10:18.362481 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9dcd8ef_dbbf_43dc_97a0_e77d942ff589.slice/crio-e0b28fa3bff07766994601d280fa3eaf1c8bf0e6a8b2ef941acc7ffbe2a7c497 WatchSource:0}: Error finding container e0b28fa3bff07766994601d280fa3eaf1c8bf0e6a8b2ef941acc7ffbe2a7c497: Status 404 returned error can't find the container with id e0b28fa3bff07766994601d280fa3eaf1c8bf0e6a8b2ef941acc7ffbe2a7c497 Nov 24 12:10:18 crc kubenswrapper[4782]: W1124 12:10:18.364700 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf59f76fb_e7fa_4c9f_aec2_9af6e6aac15a.slice/crio-dff09c18650e0505041a31606ac53891ea093305d9778d014758d6e2e5a899da WatchSource:0}: Error finding container dff09c18650e0505041a31606ac53891ea093305d9778d014758d6e2e5a899da: Status 404 returned error can't find the container with id dff09c18650e0505041a31606ac53891ea093305d9778d014758d6e2e5a899da Nov 24 12:10:18 crc kubenswrapper[4782]: W1124 12:10:18.366204 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f8d31c_392e_468d_9a86_b5a482dbc6fb.slice/crio-7e300db4e9cacbaa7042be460c968e64ee7c0c8e72c54a55ff87fc59c51e89c3 WatchSource:0}: Error finding container 7e300db4e9cacbaa7042be460c968e64ee7c0c8e72c54a55ff87fc59c51e89c3: Status 404 returned error can't find the container with id 7e300db4e9cacbaa7042be460c968e64ee7c0c8e72c54a55ff87fc59c51e89c3 Nov 24 12:10:18 crc kubenswrapper[4782]: W1124 12:10:18.368738 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60a3fcad_0c5a_4be2_b89b_4d143d3a8e62.slice/crio-eeac210bdefefbca76c1a98561ec49512a3acc475ca7bb75708ab1fbb0e485fa WatchSource:0}: Error finding container eeac210bdefefbca76c1a98561ec49512a3acc475ca7bb75708ab1fbb0e485fa: Status 404 returned error can't find the container with id eeac210bdefefbca76c1a98561ec49512a3acc475ca7bb75708ab1fbb0e485fa Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.371121 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7lrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-k6n5f_openstack-operators(60a3fcad-0c5a-4be2-b89b-4d143d3a8e62): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.371948 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-774f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-8xf2r_openstack-operators(a0f8d31c-392e-468d-9a86-b5a482dbc6fb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.374250 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-774f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-8xf2r_openstack-operators(a0f8d31c-392e-468d-9a86-b5a482dbc6fb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.375454 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" podUID="a0f8d31c-392e-468d-9a86-b5a482dbc6fb" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.375901 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7lrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-k6n5f_openstack-operators(60a3fcad-0c5a-4be2-b89b-4d143d3a8e62): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.377188 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" podUID="60a3fcad-0c5a-4be2-b89b-4d143d3a8e62" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.381583 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbpq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-clxm4_openstack-operators(19e8c85c-675d-433f-8346-878034f14d24): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.385304 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbpq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-clxm4_openstack-operators(19e8c85c-675d-433f-8346-878034f14d24): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.387652 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" podUID="19e8c85c-675d-433f-8346-878034f14d24" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.429195 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wnzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-n6cqm_openstack-operators(76bc751e-4645-4cb1-bdfe-7e3c6732505b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.434938 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wnzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-n6cqm_openstack-operators(76bc751e-4645-4cb1-bdfe-7e3c6732505b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.437331 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" podUID="76bc751e-4645-4cb1-bdfe-7e3c6732505b" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.444659 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hf2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-fjggr_openstack-operators(61b6c96b-b73c-47b5-8e05-988870f4587f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.446640 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hf2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-fjggr_openstack-operators(61b6c96b-b73c-47b5-8e05-988870f4587f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:10:18 crc kubenswrapper[4782]: E1124 12:10:18.447763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" podUID="61b6c96b-b73c-47b5-8e05-988870f4587f" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.340681 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" event={"ID":"5f8b3ed3-fba7-4a0e-8245-f822c548082e","Type":"ContainerStarted","Data":"27165d8a15c657ec723a6d30e194c85003d1f4cce745f17c87af255e11ae8b9d"} Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.350012 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" event={"ID":"19e8c85c-675d-433f-8346-878034f14d24","Type":"ContainerStarted","Data":"dbda8e50efc95c8b7b4612389276b250fc35d435085d96609fe536e04469b25e"} Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.357989 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" event={"ID":"a9dcd8ef-dbbf-43dc-97a0-e77d942ff589","Type":"ContainerStarted","Data":"e0b28fa3bff07766994601d280fa3eaf1c8bf0e6a8b2ef941acc7ffbe2a7c497"} Nov 24 12:10:19 crc kubenswrapper[4782]: E1124 12:10:19.360235 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" podUID="19e8c85c-675d-433f-8346-878034f14d24" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.368718 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" event={"ID":"f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a","Type":"ContainerStarted","Data":"dff09c18650e0505041a31606ac53891ea093305d9778d014758d6e2e5a899da"} Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.370605 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" event={"ID":"a0f8d31c-392e-468d-9a86-b5a482dbc6fb","Type":"ContainerStarted","Data":"7e300db4e9cacbaa7042be460c968e64ee7c0c8e72c54a55ff87fc59c51e89c3"} Nov 24 12:10:19 crc kubenswrapper[4782]: E1124 12:10:19.373916 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" podUID="a0f8d31c-392e-468d-9a86-b5a482dbc6fb" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.376781 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" event={"ID":"60a3fcad-0c5a-4be2-b89b-4d143d3a8e62","Type":"ContainerStarted","Data":"eeac210bdefefbca76c1a98561ec49512a3acc475ca7bb75708ab1fbb0e485fa"} Nov 24 12:10:19 crc kubenswrapper[4782]: E1124 12:10:19.381450 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" 
podUID="60a3fcad-0c5a-4be2-b89b-4d143d3a8e62" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.384572 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" event={"ID":"61b6c96b-b73c-47b5-8e05-988870f4587f","Type":"ContainerStarted","Data":"ff05f0fd013f0ac6ec870ef37b1436f9dfe28bd0930348fff05ee02d65ddee83"} Nov 24 12:10:19 crc kubenswrapper[4782]: E1124 12:10:19.394060 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" podUID="61b6c96b-b73c-47b5-8e05-988870f4587f" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.408534 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" event={"ID":"76bc751e-4645-4cb1-bdfe-7e3c6732505b","Type":"ContainerStarted","Data":"bf2a19b13140f1b3601ec874d9ceb9270a4dafe38846c41106d3191ebe5ee161"} Nov 24 12:10:19 crc kubenswrapper[4782]: E1124 12:10:19.411809 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" podUID="76bc751e-4645-4cb1-bdfe-7e3c6732505b" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.743063 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.769762 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf25e22-396d-4c6d-9585-566ffc0d0092-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-6jvns\" (UID: \"ecf25e22-396d-4c6d-9585-566ffc0d0092\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:19 crc kubenswrapper[4782]: I1124 12:10:19.989205 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:20 crc kubenswrapper[4782]: I1124 12:10:20.248882 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:20 crc kubenswrapper[4782]: I1124 12:10:20.284046 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55628383-51b4-4c77-ac10-476769165984-webhook-certs\") pod \"openstack-operator-controller-manager-689b7ddfcc-9brt2\" (UID: \"55628383-51b4-4c77-ac10-476769165984\") " pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:20 crc kubenswrapper[4782]: E1124 12:10:20.426045 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" podUID="a0f8d31c-392e-468d-9a86-b5a482dbc6fb" Nov 24 12:10:20 crc kubenswrapper[4782]: E1124 12:10:20.439433 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" podUID="19e8c85c-675d-433f-8346-878034f14d24" Nov 24 12:10:20 crc kubenswrapper[4782]: E1124 12:10:20.439578 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" podUID="61b6c96b-b73c-47b5-8e05-988870f4587f" Nov 24 12:10:20 crc kubenswrapper[4782]: E1124 12:10:20.439661 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" 
podUID="76bc751e-4645-4cb1-bdfe-7e3c6732505b" Nov 24 12:10:20 crc kubenswrapper[4782]: E1124 12:10:20.439750 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" podUID="60a3fcad-0c5a-4be2-b89b-4d143d3a8e62" Nov 24 12:10:20 crc kubenswrapper[4782]: I1124 12:10:20.519936 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:30 crc kubenswrapper[4782]: I1124 12:10:30.410859 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:10:30 crc kubenswrapper[4782]: I1124 12:10:30.412525 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:10:30 crc kubenswrapper[4782]: E1124 12:10:30.892331 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991" Nov 24 12:10:30 crc kubenswrapper[4782]: E1124 12:10:30.892540 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vlcbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68b95954c9-kmhd6_openstack-operators(ba20509b-c083-42f1-bf39-be2ed4a463f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:31 crc kubenswrapper[4782]: E1124 12:10:31.500261 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377" Nov 24 12:10:31 crc kubenswrapper[4782]: E1124 12:10:31.500469 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8x88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bfcdc958c-whlt5_openstack-operators(6b6efe11-117c-42a3-baa5-b43b07557e43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:34 crc kubenswrapper[4782]: E1124 12:10:34.938922 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 24 12:10:34 crc kubenswrapper[4782]: E1124 12:10:34.940329 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l7drz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-wmsss_openstack-operators(2ae11a51-1628-454f-8b78-77e9aaa2691b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:35 crc kubenswrapper[4782]: E1124 12:10:35.367182 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 24 12:10:35 crc kubenswrapper[4782]: E1124 12:10:35.367643 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bzlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-57dk2_openstack-operators(34e6f50e-248f-4ef3-a145-83ccb7616d0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:36 crc kubenswrapper[4782]: E1124 12:10:36.516491 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 24 12:10:36 crc kubenswrapper[4782]: E1124 12:10:36.516670 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47fw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-ctr6x_openstack-operators(ef74c0aa-ac31-49b1-861d-258fe0a3ddff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:37 crc kubenswrapper[4782]: E1124 12:10:37.848189 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 24 12:10:37 crc kubenswrapper[4782]: E1124 12:10:37.848536 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brk46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-7pb8k_openstack-operators(a9dcd8ef-dbbf-43dc-97a0-e77d942ff589): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:40 crc kubenswrapper[4782]: E1124 12:10:40.403951 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f" Nov 24 12:10:40 crc kubenswrapper[4782]: E1124 12:10:40.404422 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cb24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-7d695c9b56-czqnv_openstack-operators(5ace8cad-a0d4-4ba1-99f8-a097edd76a74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:40 crc kubenswrapper[4782]: I1124 12:10:40.787663 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns"] Nov 24 12:10:42 crc kubenswrapper[4782]: E1124 12:10:42.477738 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 24 12:10:42 crc kubenswrapper[4782]: E1124 12:10:42.478172 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wq7nn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4ltpx_openstack-operators(f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:42 crc kubenswrapper[4782]: E1124 12:10:42.479457 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" podUID="f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a" Nov 24 12:10:42 crc kubenswrapper[4782]: E1124 12:10:42.553019 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" podUID="f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a" Nov 24 12:10:43 crc kubenswrapper[4782]: E1124 12:10:43.755713 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.94:5001/openstack-k8s-operators/keystone-operator:187a5fd47d753918537ece69a3fa60c11ac88808" Nov 24 12:10:43 crc kubenswrapper[4782]: E1124 12:10:43.755987 4782 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.94:5001/openstack-k8s-operators/keystone-operator:187a5fd47d753918537ece69a3fa60c11ac88808" Nov 24 12:10:43 crc kubenswrapper[4782]: E1124 12:10:43.756132 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.94:5001/openstack-k8s-operators/keystone-operator:187a5fd47d753918537ece69a3fa60c11ac88808,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7p2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b854ddf99-pb2wn_openstack-operators(7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:10:44 crc kubenswrapper[4782]: I1124 12:10:44.568015 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" event={"ID":"ecf25e22-396d-4c6d-9585-566ffc0d0092","Type":"ContainerStarted","Data":"acc706c7e5ad9f3d6ced6c0bf5722a9aaf6f0f3f255b5386325ae29f4f2b2335"} Nov 24 12:10:44 crc kubenswrapper[4782]: I1124 12:10:44.593120 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2"] Nov 24 12:10:45 crc kubenswrapper[4782]: I1124 12:10:45.599266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc" event={"ID":"9adec34d-0e3d-4f65-80b2-4ba1c0731be4","Type":"ContainerStarted","Data":"f5b19a37526df4962f5697f360d2db1bbd09c8f9899b3f261d84d7e01f1b4f58"} Nov 24 12:10:45 crc kubenswrapper[4782]: I1124 12:10:45.604447 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" event={"ID":"55628383-51b4-4c77-ac10-476769165984","Type":"ContainerStarted","Data":"0edfbd2dd2d48d66bf596285cf287c95bea4c63e23025fed2f76a27804e5196c"} Nov 24 12:10:45 crc kubenswrapper[4782]: I1124 12:10:45.630071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" event={"ID":"ec2d6fc2-5418-4263-a351-0422b2d5068d","Type":"ContainerStarted","Data":"2ad16520b182431fe7e1bfda202da82c8bdabac41efc4c9c3e212463e612efa0"} Nov 24 12:10:45 crc kubenswrapper[4782]: I1124 12:10:45.650608 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" event={"ID":"a09a3b55-484e-461d-9f95-1e3279b323c5","Type":"ContainerStarted","Data":"b20ac3ed146678a37e1490a285b36b719da4e73050ed7a413f349a9116962831"} Nov 24 12:10:45 crc kubenswrapper[4782]: I1124 12:10:45.667790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" event={"ID":"5f8b3ed3-fba7-4a0e-8245-f822c548082e","Type":"ContainerStarted","Data":"b474f343175c08005597d8f9044d3bf463cad013a6d9dedce74797fd4b479078"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.677952 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n" 
event={"ID":"45393058-140b-48ea-9691-9bbe0740342b","Type":"ContainerStarted","Data":"15d9103d712d8bb770e1e7881258425dc5674e01180aa5645fe84882e2cc31b6"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.680018 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" event={"ID":"61b6c96b-b73c-47b5-8e05-988870f4587f","Type":"ContainerStarted","Data":"4153c1bbff4b251ed352509d01437e7238a61bf1643f7aee25d71939af149787"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.683004 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j" event={"ID":"e6982d2e-f7d3-4374-bc66-7949d3bcc062","Type":"ContainerStarted","Data":"363dd35e79725c9ac9b4446fb1704c0837f4a072a2a7785078c28bdd9cac714f"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.684184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" event={"ID":"9e50a599-1a70-46c9-94a1-d3148778888d","Type":"ContainerStarted","Data":"5ebd427d8ce330a5226461441f2fc0b35dc3df4da95906dd310e6913aefe4d70"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.686123 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" event={"ID":"55628383-51b4-4c77-ac10-476769165984","Type":"ContainerStarted","Data":"64a3927dd8a5ff149e6f6d1523377b21096a24ba2c11d52e274defb800b051e6"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.686261 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.689775 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" event={"ID":"a0f8d31c-392e-468d-9a86-b5a482dbc6fb","Type":"ContainerStarted","Data":"02a498f2b53a128e97e044ec7b1529e027333381e4653aa3ac637cdcede94456"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.701691 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" event={"ID":"60a3fcad-0c5a-4be2-b89b-4d143d3a8e62","Type":"ContainerStarted","Data":"269fd70a5d4183a3489e3e83e919ba40aa3ea21c2771ee681c0c1cf5de3cf482"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.703711 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" event={"ID":"76bc751e-4645-4cb1-bdfe-7e3c6732505b","Type":"ContainerStarted","Data":"688c0dd1db1bd7b091452fab2a0dccb1ca77ad193d154e818ecdcd62cb34e5aa"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.707000 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" event={"ID":"19e8c85c-675d-433f-8346-878034f14d24","Type":"ContainerStarted","Data":"aafcbc81545540cf92b691ca8ef8ae288ddbbf227a2eed2e3f3c10271df149b4"} Nov 24 12:10:46 crc kubenswrapper[4782]: I1124 12:10:46.735264 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" podStartSLOduration=30.735245911 podStartE2EDuration="30.735245911s" podCreationTimestamp="2025-11-24 12:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:10:46.73235848 +0000 UTC m=+895.976192259" watchObservedRunningTime="2025-11-24 12:10:46.735245911 +0000 UTC m=+895.979079690" Nov 24 12:10:50 crc kubenswrapper[4782]: I1124 12:10:50.526738 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-689b7ddfcc-9brt2" Nov 24 12:10:55 crc kubenswrapper[4782]: I1124 12:10:55.765188 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" event={"ID":"60a3fcad-0c5a-4be2-b89b-4d143d3a8e62","Type":"ContainerStarted","Data":"ab60e9024f0027210de07f567d0a45acf15cb42b178db620a998a82192cf3490"} Nov 24 12:10:55 crc kubenswrapper[4782]: I1124 12:10:55.766413 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" Nov 24 12:10:55 crc kubenswrapper[4782]: I1124 12:10:55.768899 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" Nov 24 12:10:55 crc kubenswrapper[4782]: I1124 12:10:55.772113 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" event={"ID":"ecf25e22-396d-4c6d-9585-566ffc0d0092","Type":"ContainerStarted","Data":"fa7b47644b9a52ea71fad7d6563c335adf7db51a8e7ecb4d39b66283fa08298a"} Nov 24 12:10:55 crc kubenswrapper[4782]: I1124 12:10:55.786705 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k6n5f" podStartSLOduration=3.670245345 podStartE2EDuration="40.786679468s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:18.370981372 +0000 UTC m=+867.614815141" lastFinishedPulling="2025-11-24 12:10:55.487415445 +0000 UTC m=+904.731249264" observedRunningTime="2025-11-24 12:10:55.781655528 +0000 UTC m=+905.025489297" watchObservedRunningTime="2025-11-24 12:10:55.786679468 +0000 UTC m=+905.030513237" Nov 24 12:10:55 crc kubenswrapper[4782]: E1124 12:10:55.894670 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv" podUID="5ace8cad-a0d4-4ba1-99f8-a097edd76a74" Nov 24 12:10:56 crc kubenswrapper[4782]: E1124 12:10:56.099313 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" podUID="34e6f50e-248f-4ef3-a145-83ccb7616d0d" Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.781950 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" event={"ID":"ecf25e22-396d-4c6d-9585-566ffc0d0092","Type":"ContainerStarted","Data":"153b7f661876bc63c35dd505029bd95c00249ffecf5a1823245f10950a7349dd"} Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.782161 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.783845 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" event={"ID":"a09a3b55-484e-461d-9f95-1e3279b323c5","Type":"ContainerStarted","Data":"47cff2229d5f0cb3d4c37f4ef76c3a0e2eaf617db39038ecac50e848a474e33e"} Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.784061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.786574 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" event={"ID":"34e6f50e-248f-4ef3-a145-83ccb7616d0d","Type":"ContainerStarted","Data":"beb93f1df5fc8ac69729b1ba0693b63e881e6de5ed2e6c5c57d4dd8bcee0dd18"} Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.788477 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.789453 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv" event={"ID":"5ace8cad-a0d4-4ba1-99f8-a097edd76a74","Type":"ContainerStarted","Data":"8b45ea007865d371c3c5cb4183e9ebe1596df5cda715809bc22f6dbfcfb1825b"} Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.820362 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" podStartSLOduration=31.410340668 podStartE2EDuration="41.820339052s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:43.68049913 +0000 UTC m=+892.924332899" lastFinishedPulling="2025-11-24 12:10:54.090497514 +0000 UTC m=+903.334331283" observedRunningTime="2025-11-24 12:10:56.813472081 +0000 UTC m=+906.057305850" watchObservedRunningTime="2025-11-24 12:10:56.820339052 +0000 UTC m=+906.064172841" Nov 24 12:10:56 crc kubenswrapper[4782]: I1124 12:10:56.861432 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-kh9k5" podStartSLOduration=3.632647884 podStartE2EDuration="41.86141277s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.669024751 +0000 UTC m=+866.912858520" lastFinishedPulling="2025-11-24 12:10:55.897789637 +0000 UTC m=+905.141623406" observedRunningTime="2025-11-24 12:10:56.853667935 +0000 UTC m=+906.097501704" watchObservedRunningTime="2025-11-24 12:10:56.86141277 +0000 UTC m=+906.105246559" Nov 24 12:10:57 crc kubenswrapper[4782]: E1124 12:10:57.667969 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6" podUID="ba20509b-c083-42f1-bf39-be2ed4a463f7" Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.807797 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" 
event={"ID":"76bc751e-4645-4cb1-bdfe-7e3c6732505b","Type":"ContainerStarted","Data":"863a7abb50a9240e58435f32c29a68dad42f5c6b49c8cd4d3e2e4c377d350302"} Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.808771 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.811689 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.815710 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6" event={"ID":"ba20509b-c083-42f1-bf39-be2ed4a463f7","Type":"ContainerStarted","Data":"6a0d262e3cdc1609c4db9042aebca8a79e2be8559bba0c4eef7014f900f2dccd"} Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.817601 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.819761 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" event={"ID":"ec2d6fc2-5418-4263-a351-0422b2d5068d","Type":"ContainerStarted","Data":"9ef0bfdaa05a08c52401fbb5c9b7fe64ec484c50ca14341a7df86c087afaa6f5"} Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.820305 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.823108 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.831623 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-n6cqm" podStartSLOduration=3.163983436 podStartE2EDuration="41.831605384s" podCreationTimestamp="2025-11-24 12:10:16 +0000 UTC" firstStartedPulling="2025-11-24 12:10:18.429080492 +0000 UTC m=+867.672914261" lastFinishedPulling="2025-11-24 12:10:57.09670244 +0000 UTC m=+906.340536209" observedRunningTime="2025-11-24 12:10:57.826834722 +0000 UTC m=+907.070668491" watchObservedRunningTime="2025-11-24 12:10:57.831605384 +0000 UTC m=+907.075439163" Nov 24 12:10:57 crc kubenswrapper[4782]: I1124 12:10:57.853629 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2hr2j" podStartSLOduration=3.188410983 podStartE2EDuration="42.853601254s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.930604139 +0000 UTC m=+867.174437908" lastFinishedPulling="2025-11-24 12:10:57.59579441 +0000 UTC m=+906.839628179" observedRunningTime="2025-11-24 12:10:57.84444739 +0000 UTC m=+907.088281189" watchObservedRunningTime="2025-11-24 12:10:57.853601254 +0000 UTC m=+907.097435033" Nov 24 12:10:58 crc kubenswrapper[4782]: E1124 12:10:58.156875 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" podUID="a9dcd8ef-dbbf-43dc-97a0-e77d942ff589" Nov 
24 12:10:58 crc kubenswrapper[4782]: E1124 12:10:58.218689 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss" podUID="2ae11a51-1628-454f-8b78-77e9aaa2691b" Nov 24 12:10:58 crc kubenswrapper[4782]: E1124 12:10:58.274422 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" podUID="6b6efe11-117c-42a3-baa5-b43b07557e43" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.829610 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" event={"ID":"34e6f50e-248f-4ef3-a145-83ccb7616d0d","Type":"ContainerStarted","Data":"123092c2c26484c0d3895647893689ebeda7f99c223db650e18d3f682668ea17"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.830110 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.832952 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j" event={"ID":"e6982d2e-f7d3-4374-bc66-7949d3bcc062","Type":"ContainerStarted","Data":"b525f4f3ccce105532d5f980025bd8d284aa96207aabfc0e4d0761d563dc9719"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.833852 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.841742 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss" event={"ID":"2ae11a51-1628-454f-8b78-77e9aaa2691b","Type":"ContainerStarted","Data":"4a35d8336d825bc69f67b3ac630a84757ea53e2dbd4d5872b665c93ec75d8f2a"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.843860 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.857092 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" podStartSLOduration=3.337807703 podStartE2EDuration="43.857060171s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.680978332 +0000 UTC m=+866.924812101" lastFinishedPulling="2025-11-24 12:10:58.2002308 +0000 UTC m=+907.444064569" observedRunningTime="2025-11-24 12:10:58.851465896 +0000 UTC m=+908.095299665" watchObservedRunningTime="2025-11-24 12:10:58.857060171 +0000 UTC m=+908.100893940" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.859660 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc" event={"ID":"9adec34d-0e3d-4f65-80b2-4ba1c0731be4","Type":"ContainerStarted","Data":"9570c4676aa2782f0c0af9e5f891a470eba079b5dd2a841e9f532bdd4bf248a3"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.860283 4782 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.862522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" event={"ID":"19e8c85c-675d-433f-8346-878034f14d24","Type":"ContainerStarted","Data":"bc63e12bada0841856310e73e42e610aedb8510e6bb2e1e80f48a09c142c5079"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.863151 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.867229 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.871071 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.880171 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" event={"ID":"a9dcd8ef-dbbf-43dc-97a0-e77d942ff589","Type":"ContainerStarted","Data":"e2238287fab7a0e4eb23fe403a8692ff733c8059ed41a146b61ff325946c74c9"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.891023 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" event={"ID":"6b6efe11-117c-42a3-baa5-b43b07557e43","Type":"ContainerStarted","Data":"20aa1c19feb0bac5361ecddcde2b22c4bfdf7364eefbc38576cb32160c5ae2fa"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.904692 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" event={"ID":"9e50a599-1a70-46c9-94a1-d3148778888d","Type":"ContainerStarted","Data":"9a095aa2eac3e75014ce223e2bf2e1ad68861792a518981948defd779a6b24e2"} Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.904758 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.920667 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.922053 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-nqd5j" podStartSLOduration=3.408159456 podStartE2EDuration="43.922038295s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.628267431 +0000 UTC m=+866.872101200" lastFinishedPulling="2025-11-24 12:10:58.14214627 +0000 UTC m=+907.385980039" observedRunningTime="2025-11-24 12:10:58.908149113 +0000 UTC m=+908.151982882" watchObservedRunningTime="2025-11-24 12:10:58.922038295 +0000 UTC m=+908.165872064" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.962955 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-clxm4" podStartSLOduration=3.414163055 podStartE2EDuration="42.96293324s" podCreationTimestamp="2025-11-24 12:10:16 +0000 
UTC" firstStartedPulling="2025-11-24 12:10:18.381482873 +0000 UTC m=+867.625316642" lastFinishedPulling="2025-11-24 12:10:57.930253058 +0000 UTC m=+907.174086827" observedRunningTime="2025-11-24 12:10:58.942496743 +0000 UTC m=+908.186330512" watchObservedRunningTime="2025-11-24 12:10:58.96293324 +0000 UTC m=+908.206767009" Nov 24 12:10:58 crc kubenswrapper[4782]: I1124 12:10:58.974599 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-kr5jc" podStartSLOduration=2.915833804 podStartE2EDuration="43.974575612s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.066077322 +0000 UTC m=+866.309911091" lastFinishedPulling="2025-11-24 12:10:58.12481914 +0000 UTC m=+907.368652899" observedRunningTime="2025-11-24 12:10:58.968368075 +0000 UTC m=+908.212201854" watchObservedRunningTime="2025-11-24 12:10:58.974575612 +0000 UTC m=+908.218409401" Nov 24 12:10:59 crc kubenswrapper[4782]: E1124 12:10:59.015877 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" podUID="7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.063063 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tmp5g" podStartSLOduration=4.172998078 podStartE2EDuration="44.06304111s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.849958424 +0000 UTC m=+867.093792193" lastFinishedPulling="2025-11-24 12:10:57.740001456 +0000 UTC m=+906.983835225" observedRunningTime="2025-11-24 12:10:59.048642734 +0000 UTC m=+908.292476503" watchObservedRunningTime="2025-11-24 12:10:59.06304111 +0000 UTC m=+908.306874879" Nov 24 12:10:59 crc kubenswrapper[4782]: E1124 12:10:59.063642 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" podUID="ef74c0aa-ac31-49b1-861d-258fe0a3ddff" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.911689 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" event={"ID":"7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c","Type":"ContainerStarted","Data":"1badc5f7e7dc9c200ab167c8f9e463362732d6f092a01088db6d473d3df10943"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.915045 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv" event={"ID":"5ace8cad-a0d4-4ba1-99f8-a097edd76a74","Type":"ContainerStarted","Data":"bfbbb6bbade8ac3fe9808db2e740c80d24d0b7e75092c1874325e30d27af47df"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.915489 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.921276 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" 
event={"ID":"5f8b3ed3-fba7-4a0e-8245-f822c548082e","Type":"ContainerStarted","Data":"8490528a0c8b982b12a3b21667d2742f52dc8e863e8a08422c5c4f8fa8a845a6"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.921633 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.923247 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.924164 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" event={"ID":"ef74c0aa-ac31-49b1-861d-258fe0a3ddff","Type":"ContainerStarted","Data":"fc447028a94115b7bbed433b9d82083d89e7985f6884dcacac01c07f6d7212e4"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.934193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss" event={"ID":"2ae11a51-1628-454f-8b78-77e9aaa2691b","Type":"ContainerStarted","Data":"17cbc5ec6c2607e5842e2c579cde69cdb6629bf51f73f9c5c6f74a9f0ed6c47d"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.934408 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.936734 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" event={"ID":"a9dcd8ef-dbbf-43dc-97a0-e77d942ff589","Type":"ContainerStarted","Data":"1262edb48e8058446ee0207f214d8efb6f10c27e2806f63704d164afde0480ce"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.937572 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.945307 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" event={"ID":"6b6efe11-117c-42a3-baa5-b43b07557e43","Type":"ContainerStarted","Data":"1ef1348353eb41d143df21b7e40617a743a5c78f6e92066280c49fc4721ca407"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.967730 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.970236 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" event={"ID":"f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a","Type":"ContainerStarted","Data":"1af3e856e48a3d4291af27b178deab15bd5926ad43ea818d243141f91ae29707"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.971955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" event={"ID":"a0f8d31c-392e-468d-9a86-b5a482dbc6fb","Type":"ContainerStarted","Data":"fd70acd01b15914f1cf5b7c2d6b59297b5e816303094d55f39c56b7d12a02364"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.972244 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.976811 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.980351 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" event={"ID":"61b6c96b-b73c-47b5-8e05-988870f4587f","Type":"ContainerStarted","Data":"c21630c3c2ad151a30673dd9a2d64faa4c53a9e0f74e1928a361171d2bd78dd1"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.982307 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.986753 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6" event={"ID":"ba20509b-c083-42f1-bf39-be2ed4a463f7","Type":"ContainerStarted","Data":"2e4af8f1769bf4189d698b490753632545d38032fb6a7e9f337c8fe70616aa2b"} Nov 24 12:10:59 crc kubenswrapper[4782]: I1124 12:10:59.986915 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.000215 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.005972 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-6jvns" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.035772 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss" podStartSLOduration=3.574928806 podStartE2EDuration="45.03575581s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.897760798 +0000 UTC m=+867.141594567" lastFinishedPulling="2025-11-24 12:10:59.358587802 +0000 UTC m=+908.602421571" observedRunningTime="2025-11-24 12:11:00.027874939 +0000 UTC m=+909.271708738" watchObservedRunningTime="2025-11-24 12:11:00.03575581 +0000 UTC m=+909.279589579" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.063491 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" podStartSLOduration=3.816347264 podStartE2EDuration="45.063468942s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:18.380953488 +0000 UTC m=+867.624787257" lastFinishedPulling="2025-11-24 12:10:59.628075166 +0000 UTC m=+908.871908935" observedRunningTime="2025-11-24 12:11:00.059425914 +0000 UTC m=+909.303259703" watchObservedRunningTime="2025-11-24 12:11:00.063468942 +0000 UTC m=+909.307302721" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.089481 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv" podStartSLOduration=3.413918775 podStartE2EDuration="45.089462038s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.681907698 +0000 UTC m=+866.925741467" lastFinishedPulling="2025-11-24 12:10:59.357450961 +0000 UTC m=+908.601284730" observedRunningTime="2025-11-24 12:11:00.085336648 +0000 
UTC m=+909.329170417" watchObservedRunningTime="2025-11-24 12:11:00.089462038 +0000 UTC m=+909.333295807" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.108427 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pdvh7" podStartSLOduration=5.049642307 podStartE2EDuration="45.108409695s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:18.340312222 +0000 UTC m=+867.584145991" lastFinishedPulling="2025-11-24 12:10:58.39907961 +0000 UTC m=+907.642913379" observedRunningTime="2025-11-24 12:11:00.105696773 +0000 UTC m=+909.349530542" watchObservedRunningTime="2025-11-24 12:11:00.108409695 +0000 UTC m=+909.352243474" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.154631 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fjggr" podStartSLOduration=5.32234747 podStartE2EDuration="45.154618432s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:18.44453968 +0000 UTC m=+867.688373449" lastFinishedPulling="2025-11-24 12:10:58.276810642 +0000 UTC m=+907.520644411" observedRunningTime="2025-11-24 12:11:00.153332438 +0000 UTC m=+909.397166207" watchObservedRunningTime="2025-11-24 12:11:00.154618432 +0000 UTC m=+909.398452201" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.201642 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4ltpx" podStartSLOduration=3.243300487 podStartE2EDuration="43.201627401s" podCreationTimestamp="2025-11-24 12:10:17 +0000 UTC" firstStartedPulling="2025-11-24 12:10:18.368654958 +0000 UTC m=+867.612488717" lastFinishedPulling="2025-11-24 12:10:58.326981862 +0000 UTC m=+907.570815631" observedRunningTime="2025-11-24 12:11:00.195283811 +0000 UTC m=+909.439117600" watchObservedRunningTime="2025-11-24 12:11:00.201627401 +0000 UTC m=+909.445461160" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.259232 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6" podStartSLOduration=3.8825222630000003 podStartE2EDuration="45.259214572s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.137854181 +0000 UTC m=+866.381687950" lastFinishedPulling="2025-11-24 12:10:58.51454649 +0000 UTC m=+907.758380259" observedRunningTime="2025-11-24 12:11:00.258794831 +0000 UTC m=+909.502628600" watchObservedRunningTime="2025-11-24 12:11:00.259214572 +0000 UTC m=+909.503048351" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.262542 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" podStartSLOduration=3.209804342 podStartE2EDuration="45.262529491s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.576998001 +0000 UTC m=+866.820831760" lastFinishedPulling="2025-11-24 12:10:59.62972315 +0000 UTC m=+908.873556909" observedRunningTime="2025-11-24 12:11:00.22551067 +0000 UTC m=+909.469344449" watchObservedRunningTime="2025-11-24 12:11:00.262529491 +0000 UTC m=+909.506363260" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.295583 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-5cb74df96-8xf2r" podStartSLOduration=3.524562065 podStartE2EDuration="44.295566565s" podCreationTimestamp="2025-11-24 12:10:16 +0000 UTC" firstStartedPulling="2025-11-24 12:10:18.371883567 +0000 UTC m=+867.615717336" lastFinishedPulling="2025-11-24 12:10:59.142888067 +0000 UTC m=+908.386721836" observedRunningTime="2025-11-24 12:11:00.292108413 +0000 UTC m=+909.535942182" watchObservedRunningTime="2025-11-24 12:11:00.295566565 +0000 UTC m=+909.539400324" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.410889 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.411221 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.992137 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" event={"ID":"ef74c0aa-ac31-49b1-861d-258fe0a3ddff","Type":"ContainerStarted","Data":"814d47fbd159a0e6c66adbcccbcf76f5d240114be29bc33ca327c565072fd08c"} Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.992885 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.994050 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n" event={"ID":"45393058-140b-48ea-9691-9bbe0740342b","Type":"ContainerStarted","Data":"fc4b87f36105135af55b4dce20ecd4443c180599662d3fe836de443560d79b8c"} Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.994507 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.995975 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" event={"ID":"7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c","Type":"ContainerStarted","Data":"aa64f6b122efe3a0e8d5c1e6a35be0c38d399e32cf82e315a0d8a638fa083b46"} Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.996360 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" Nov 24 12:11:00 crc kubenswrapper[4782]: I1124 12:11:00.999939 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n" Nov 24 12:11:01 crc kubenswrapper[4782]: I1124 12:11:01.011939 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" podStartSLOduration=3.463530179 podStartE2EDuration="46.011918872s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.914464731 +0000 UTC 
m=+867.158298500" lastFinishedPulling="2025-11-24 12:11:00.462853424 +0000 UTC m=+909.706687193" observedRunningTime="2025-11-24 12:11:01.010698069 +0000 UTC m=+910.254531848" watchObservedRunningTime="2025-11-24 12:11:01.011918872 +0000 UTC m=+910.255752641" Nov 24 12:11:01 crc kubenswrapper[4782]: I1124 12:11:01.031989 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k5z2n" podStartSLOduration=3.159377035 podStartE2EDuration="46.031967899s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.667536379 +0000 UTC m=+866.911370148" lastFinishedPulling="2025-11-24 12:11:00.540127243 +0000 UTC m=+909.783961012" observedRunningTime="2025-11-24 12:11:01.028160977 +0000 UTC m=+910.271994756" watchObservedRunningTime="2025-11-24 12:11:01.031967899 +0000 UTC m=+910.275801668" Nov 24 12:11:01 crc kubenswrapper[4782]: I1124 12:11:01.065915 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" podStartSLOduration=3.6574041360000002 podStartE2EDuration="46.065899807s" podCreationTimestamp="2025-11-24 12:10:15 +0000 UTC" firstStartedPulling="2025-11-24 12:10:17.622296096 +0000 UTC m=+866.866129865" lastFinishedPulling="2025-11-24 12:11:00.030791767 +0000 UTC m=+909.274625536" observedRunningTime="2025-11-24 12:11:01.062516266 +0000 UTC m=+910.306350055" watchObservedRunningTime="2025-11-24 12:11:01.065899807 +0000 UTC m=+910.309733576" Nov 24 12:11:05 crc kubenswrapper[4782]: I1124 12:11:05.702662 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-czqnv" Nov 24 12:11:05 crc kubenswrapper[4782]: I1124 12:11:05.761857 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-kmhd6" Nov 24 12:11:05 crc kubenswrapper[4782]: I1124 12:11:05.787991 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wmsss" Nov 24 12:11:05 crc kubenswrapper[4782]: I1124 12:11:05.874823 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-whlt5" Nov 24 12:11:05 crc kubenswrapper[4782]: I1124 12:11:05.918453 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b854ddf99-pb2wn" Nov 24 12:11:06 crc kubenswrapper[4782]: I1124 12:11:06.071397 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-ctr6x" Nov 24 12:11:06 crc kubenswrapper[4782]: I1124 12:11:06.166504 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-57dk2" Nov 24 12:11:06 crc kubenswrapper[4782]: I1124 12:11:06.390873 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7pb8k" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.122357 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-znq6j"] Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.124619 4782 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.129338 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.129673 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.129923 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-f8gml" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.134647 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.155526 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-znq6j"] Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.211178 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnz6r"] Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.216990 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.223172 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.230038 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnz6r"] Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.244962 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44769636-e101-47e1-aaad-8ea51d7ef5f0-config\") pod \"dnsmasq-dns-675f4bcbfc-znq6j\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.245101 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htg76\" (UniqueName: \"kubernetes.io/projected/44769636-e101-47e1-aaad-8ea51d7ef5f0-kube-api-access-htg76\") pod \"dnsmasq-dns-675f4bcbfc-znq6j\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.346570 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.346920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htg76\" (UniqueName: \"kubernetes.io/projected/44769636-e101-47e1-aaad-8ea51d7ef5f0-kube-api-access-htg76\") pod \"dnsmasq-dns-675f4bcbfc-znq6j\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.346963 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7p62\" (UniqueName: \"kubernetes.io/projected/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-kube-api-access-j7p62\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: 
\"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.346986 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44769636-e101-47e1-aaad-8ea51d7ef5f0-config\") pod \"dnsmasq-dns-675f4bcbfc-znq6j\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.347005 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-config\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.348597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44769636-e101-47e1-aaad-8ea51d7ef5f0-config\") pod \"dnsmasq-dns-675f4bcbfc-znq6j\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.366965 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htg76\" (UniqueName: \"kubernetes.io/projected/44769636-e101-47e1-aaad-8ea51d7ef5f0-kube-api-access-htg76\") pod \"dnsmasq-dns-675f4bcbfc-znq6j\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.448911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.449004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7p62\" (UniqueName: \"kubernetes.io/projected/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-kube-api-access-j7p62\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.449042 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-config\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.450182 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-config\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.450843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.453174 4782 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.479023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7p62\" (UniqueName: \"kubernetes.io/projected/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-kube-api-access-j7p62\") pod \"dnsmasq-dns-78dd6ddcc-vnz6r\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.533760 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:21 crc kubenswrapper[4782]: I1124 12:11:21.923859 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-znq6j"] Nov 24 12:11:21 crc kubenswrapper[4782]: W1124 12:11:21.927860 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44769636_e101_47e1_aaad_8ea51d7ef5f0.slice/crio-120049989f3a5a70c218d6d2f5ba1fc8222bf838374f8dbe45779f463fd2fb19 WatchSource:0}: Error finding container 120049989f3a5a70c218d6d2f5ba1fc8222bf838374f8dbe45779f463fd2fb19: Status 404 returned error can't find the container with id 120049989f3a5a70c218d6d2f5ba1fc8222bf838374f8dbe45779f463fd2fb19 Nov 24 12:11:22 crc kubenswrapper[4782]: I1124 12:11:22.019118 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnz6r"] Nov 24 12:11:22 crc kubenswrapper[4782]: W1124 12:11:22.020646 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60fcad8_53c7_4a04_9fc9_3ed8189da17c.slice/crio-4f8c44a902240e6b3a00f8f8c8b14de948412b98971b64a81e382c2dd271e25d WatchSource:0}: Error finding container 4f8c44a902240e6b3a00f8f8c8b14de948412b98971b64a81e382c2dd271e25d: Status 404 returned error can't find the container with id 4f8c44a902240e6b3a00f8f8c8b14de948412b98971b64a81e382c2dd271e25d Nov 24 12:11:22 crc kubenswrapper[4782]: I1124 12:11:22.153484 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" event={"ID":"d60fcad8-53c7-4a04-9fc9-3ed8189da17c","Type":"ContainerStarted","Data":"4f8c44a902240e6b3a00f8f8c8b14de948412b98971b64a81e382c2dd271e25d"} Nov 24 12:11:22 crc kubenswrapper[4782]: I1124 12:11:22.155191 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" event={"ID":"44769636-e101-47e1-aaad-8ea51d7ef5f0","Type":"ContainerStarted","Data":"120049989f3a5a70c218d6d2f5ba1fc8222bf838374f8dbe45779f463fd2fb19"} Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.216046 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-znq6j"] Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.287515 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p8crc"] Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.288680 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.328023 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p8crc"] Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.396265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-dns-svc\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.396358 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhcn\" (UniqueName: \"kubernetes.io/projected/32cceb35-3bf5-4e4b-b338-296eb56cf073-kube-api-access-zfhcn\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.396436 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-config\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.497668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-config\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.497765 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-dns-svc\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.497804 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhcn\" (UniqueName: \"kubernetes.io/projected/32cceb35-3bf5-4e4b-b338-296eb56cf073-kube-api-access-zfhcn\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.499015 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-config\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.499025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-dns-svc\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.532129 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhcn\" (UniqueName: 
\"kubernetes.io/projected/32cceb35-3bf5-4e4b-b338-296eb56cf073-kube-api-access-zfhcn\") pod \"dnsmasq-dns-666b6646f7-p8crc\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.595573 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnz6r"] Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.619737 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.626979 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9xbk5"] Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.633201 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.645897 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9xbk5"] Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.701980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.702065 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvkx\" (UniqueName: \"kubernetes.io/projected/05c76f00-86eb-4645-85cc-16264a012985-kube-api-access-4kvkx\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.702095 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-config\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.804047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.806851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.807302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kvkx\" (UniqueName: \"kubernetes.io/projected/05c76f00-86eb-4645-85cc-16264a012985-kube-api-access-4kvkx\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.807922 4782 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-config\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.808179 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-config\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.832094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kvkx\" (UniqueName: \"kubernetes.io/projected/05c76f00-86eb-4645-85cc-16264a012985-kube-api-access-4kvkx\") pod \"dnsmasq-dns-57d769cc4f-9xbk5\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") " pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:24 crc kubenswrapper[4782]: I1124 12:11:24.952912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.283647 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p8crc"] Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.415734 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.417131 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.420429 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.420725 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.420907 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-dd7fl" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.421074 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.423982 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.424258 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.424341 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.431443 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531552 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fth46\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-kube-api-access-fth46\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531589 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531617 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531634 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531675 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531721 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531773 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531796 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531825 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.531854 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.554094 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9xbk5"] Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633704 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633756 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633806 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633843 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633876 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fth46\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-kube-api-access-fth46\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633898 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633926 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633949 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.633989 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.634022 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.634064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.634441 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.634479 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.634794 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.636108 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.637189 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.637625 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.641356 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.642359 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.648763 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.649683 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.650948 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fth46\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-kube-api-access-fth46\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.666945 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.744842 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.763890 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.765234 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.771791 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.771813 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.772103 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.772207 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.772431 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.772820 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.773130 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2bbf4" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.780970 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.837877 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838059 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838097 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/819def2d-6f25-42ca-91f6-6951b7b97549-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838170 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzfz\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-kube-api-access-kmzfz\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838242 4782 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838330 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838393 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/819def2d-6f25-42ca-91f6-6951b7b97549-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838421 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.838462 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.939922 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940291 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/819def2d-6f25-42ca-91f6-6951b7b97549-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940309 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940350 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940509 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940537 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940553 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/819def2d-6f25-42ca-91f6-6951b7b97549-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.940571 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzfz\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-kube-api-access-kmzfz\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.941247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.941362 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.941606 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.945421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.949745 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.952319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/819def2d-6f25-42ca-91f6-6951b7b97549-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.954108 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.955939 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.956693 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/819def2d-6f25-42ca-91f6-6951b7b97549-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.989122 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:25 crc kubenswrapper[4782]: I1124 12:11:25.990245 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzfz\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-kube-api-access-kmzfz\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:26 crc kubenswrapper[4782]: I1124 12:11:26.009126 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:26 crc kubenswrapper[4782]: I1124 12:11:26.104030 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:11:26 crc kubenswrapper[4782]: I1124 12:11:26.212413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" event={"ID":"05c76f00-86eb-4645-85cc-16264a012985","Type":"ContainerStarted","Data":"0900c697a98fedf576c7fae81345bcef1b8cc4c13f0d6f056b09d6fee58a96c0"} Nov 24 12:11:26 crc kubenswrapper[4782]: I1124 12:11:26.217596 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" event={"ID":"32cceb35-3bf5-4e4b-b338-296eb56cf073","Type":"ContainerStarted","Data":"eb085a54f5b2b1a24389d82f7941a71566aa93aeec98cbd1b1fa667268da5463"} Nov 24 12:11:26 crc kubenswrapper[4782]: I1124 12:11:26.384979 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:11:26 crc kubenswrapper[4782]: I1124 12:11:26.710685 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:11:26 crc kubenswrapper[4782]: W1124 12:11:26.713757 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod819def2d_6f25_42ca_91f6_6951b7b97549.slice/crio-e1491967029859c779d4c872d5fca7910891c19a55404d892fb7230343e75207 WatchSource:0}: Error finding container e1491967029859c779d4c872d5fca7910891c19a55404d892fb7230343e75207: Status 404 returned error can't find the container with id e1491967029859c779d4c872d5fca7910891c19a55404d892fb7230343e75207 Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.208024 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.209600 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.217407 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.217563 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-t5trw" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.217652 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.218663 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.220826 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.260016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"819def2d-6f25-42ca-91f6-6951b7b97549","Type":"ContainerStarted","Data":"e1491967029859c779d4c872d5fca7910891c19a55404d892fb7230343e75207"} Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.266339 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb","Type":"ContainerStarted","Data":"14cd36d35e31d53754bffb89bc3e2bf5eed690da14099498027c4b8f74944e03"} Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.276678 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.281780 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b33a2a59-697b-4973-b01d-5933d2319593-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.281817 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33a2a59-697b-4973-b01d-5933d2319593-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.281883 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b33a2a59-697b-4973-b01d-5933d2319593-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.281907 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.281925 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.281960 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p622q\" (UniqueName: \"kubernetes.io/projected/b33a2a59-697b-4973-b01d-5933d2319593-kube-api-access-p622q\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.281982 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.282010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-config-data-default\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.385651 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b33a2a59-697b-4973-b01d-5933d2319593-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.386025 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.386047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-kolla-config\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.386899 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-kolla-config\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.387432 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.387502 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b33a2a59-697b-4973-b01d-5933d2319593-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.387598 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p622q\" (UniqueName: \"kubernetes.io/projected/b33a2a59-697b-4973-b01d-5933d2319593-kube-api-access-p622q\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.387627 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.388031 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.392578 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-config-data-default\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.394430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b33a2a59-697b-4973-b01d-5933d2319593-config-data-default\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.394586 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b33a2a59-697b-4973-b01d-5933d2319593-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.394602 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33a2a59-697b-4973-b01d-5933d2319593-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.417916 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p622q\" (UniqueName: \"kubernetes.io/projected/b33a2a59-697b-4973-b01d-5933d2319593-kube-api-access-p622q\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.425933 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b33a2a59-697b-4973-b01d-5933d2319593-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.442515 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " 
pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.452425 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33a2a59-697b-4973-b01d-5933d2319593-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b33a2a59-697b-4973-b01d-5933d2319593\") " pod="openstack/openstack-galera-0" Nov 24 12:11:27 crc kubenswrapper[4782]: I1124 12:11:27.565996 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.572977 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.644463 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.644577 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.648718 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.648983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nn6qk" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.649199 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.649456 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.812531 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.813436 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.816923 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.817233 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820193 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820227 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820264 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66c75fd-ec79-4997-9e45-70865f612c8f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820287 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820323 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820383 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b66c75fd-ec79-4997-9e45-70865f612c8f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn5g5\" (UniqueName: \"kubernetes.io/projected/b66c75fd-ec79-4997-9e45-70865f612c8f-kube-api-access-fn5g5\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820437 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66c75fd-ec79-4997-9e45-70865f612c8f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.820523 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.825626 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sf857" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922105 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn5g5\" (UniqueName: \"kubernetes.io/projected/b66c75fd-ec79-4997-9e45-70865f612c8f-kube-api-access-fn5g5\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922163 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fdx\" (UniqueName: \"kubernetes.io/projected/579fda47-7251-4722-b19c-eadbf6aaba21-kube-api-access-k2fdx\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922183 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/579fda47-7251-4722-b19c-eadbf6aaba21-config-data\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922199 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66c75fd-ec79-4997-9e45-70865f612c8f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922292 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66c75fd-ec79-4997-9e45-70865f612c8f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922326 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922352 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922385 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/579fda47-7251-4722-b19c-eadbf6aaba21-memcached-tls-certs\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922415 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/579fda47-7251-4722-b19c-eadbf6aaba21-kolla-config\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922433 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579fda47-7251-4722-b19c-eadbf6aaba21-combined-ca-bundle\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922476 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b66c75fd-ec79-4997-9e45-70865f612c8f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922760 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b66c75fd-ec79-4997-9e45-70865f612c8f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.922916 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.924228 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.924641 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.927333 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b66c75fd-ec79-4997-9e45-70865f612c8f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.932633 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66c75fd-ec79-4997-9e45-70865f612c8f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.932791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66c75fd-ec79-4997-9e45-70865f612c8f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.955506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn5g5\" (UniqueName: \"kubernetes.io/projected/b66c75fd-ec79-4997-9e45-70865f612c8f-kube-api-access-fn5g5\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:28 crc kubenswrapper[4782]: I1124 12:11:28.980525 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b66c75fd-ec79-4997-9e45-70865f612c8f\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.023438 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/579fda47-7251-4722-b19c-eadbf6aaba21-memcached-tls-certs\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.023787 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/579fda47-7251-4722-b19c-eadbf6aaba21-kolla-config\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.023827 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579fda47-7251-4722-b19c-eadbf6aaba21-combined-ca-bundle\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.023967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2fdx\" (UniqueName: \"kubernetes.io/projected/579fda47-7251-4722-b19c-eadbf6aaba21-kube-api-access-k2fdx\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.023991 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/579fda47-7251-4722-b19c-eadbf6aaba21-config-data\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.024786 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/579fda47-7251-4722-b19c-eadbf6aaba21-config-data\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.025446 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/579fda47-7251-4722-b19c-eadbf6aaba21-kolla-config\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.027812 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/579fda47-7251-4722-b19c-eadbf6aaba21-memcached-tls-certs\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.032185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579fda47-7251-4722-b19c-eadbf6aaba21-combined-ca-bundle\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.057614 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2fdx\" (UniqueName: \"kubernetes.io/projected/579fda47-7251-4722-b19c-eadbf6aaba21-kube-api-access-k2fdx\") pod \"memcached-0\" (UID: \"579fda47-7251-4722-b19c-eadbf6aaba21\") " pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.129142 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 12:11:29 crc kubenswrapper[4782]: I1124 12:11:29.275917 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 12:11:30 crc kubenswrapper[4782]: I1124 12:11:30.410461 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:11:30 crc kubenswrapper[4782]: I1124 12:11:30.410812 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:11:30 crc kubenswrapper[4782]: I1124 12:11:30.410865 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:11:30 crc kubenswrapper[4782]: I1124 12:11:30.411768 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5948f238852b206092c207d3cf86760b27f85d8ef83dc51c3375bc2e50a4023a"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:11:30 crc kubenswrapper[4782]: I1124 12:11:30.411833 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://5948f238852b206092c207d3cf86760b27f85d8ef83dc51c3375bc2e50a4023a" gracePeriod=600 Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.209876 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.210934 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.220160 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-8svxd" Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.267688 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.315283 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="5948f238852b206092c207d3cf86760b27f85d8ef83dc51c3375bc2e50a4023a" exitCode=0 Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.315583 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"5948f238852b206092c207d3cf86760b27f85d8ef83dc51c3375bc2e50a4023a"} Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.315724 4782 scope.go:117] "RemoveContainer" containerID="0b0de42a9e31e2c0b3d63e2b240a6563edaaeabf4b832fd49516335de30ba2d0" Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.360664 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b46n\" (UniqueName: \"kubernetes.io/projected/1c21319e-8ce0-4e9d-87e5-abaa9e51eae2-kube-api-access-4b46n\") pod \"kube-state-metrics-0\" (UID: \"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2\") " pod="openstack/kube-state-metrics-0" Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.462447 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b46n\" (UniqueName: \"kubernetes.io/projected/1c21319e-8ce0-4e9d-87e5-abaa9e51eae2-kube-api-access-4b46n\") pod \"kube-state-metrics-0\" (UID: \"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2\") " pod="openstack/kube-state-metrics-0" Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.487938 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b46n\" (UniqueName: \"kubernetes.io/projected/1c21319e-8ce0-4e9d-87e5-abaa9e51eae2-kube-api-access-4b46n\") pod \"kube-state-metrics-0\" (UID: \"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2\") " pod="openstack/kube-state-metrics-0" Nov 24 12:11:31 crc kubenswrapper[4782]: I1124 12:11:31.533038 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.401359 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m6c9b"] Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.402597 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.406707 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qczzx" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.406792 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.411945 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.414137 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7bqn5"] Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.415552 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.423918 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m6c9b"] Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.447538 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7bqn5"] Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510266 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52g64\" (UniqueName: \"kubernetes.io/projected/a62553ed-d73b-49c8-be06-e9ad0542d8da-kube-api-access-52g64\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510316 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6d4j\" (UniqueName: \"kubernetes.io/projected/4836b782-f203-42c9-95f7-58a33a861aa1-kube-api-access-h6d4j\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-log-ovn\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510473 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a62553ed-d73b-49c8-be06-e9ad0542d8da-scripts\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510516 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-run\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510535 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-run\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " 
pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510586 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-etc-ovs\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-log\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510663 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-lib\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510679 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62553ed-d73b-49c8-be06-e9ad0542d8da-ovn-controller-tls-certs\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510699 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4836b782-f203-42c9-95f7-58a33a861aa1-scripts\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510714 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-run-ovn\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.510737 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62553ed-d73b-49c8-be06-e9ad0542d8da-combined-ca-bundle\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.612716 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-run-ovn\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.612809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62553ed-d73b-49c8-be06-e9ad0542d8da-combined-ca-bundle\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc 
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.612891 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52g64\" (UniqueName: \"kubernetes.io/projected/a62553ed-d73b-49c8-be06-e9ad0542d8da-kube-api-access-52g64\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.612936 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6d4j\" (UniqueName: \"kubernetes.io/projected/4836b782-f203-42c9-95f7-58a33a861aa1-kube-api-access-h6d4j\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.612979 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-log-ovn\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a62553ed-d73b-49c8-be06-e9ad0542d8da-scripts\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613117 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-run\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613160 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-run\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613187 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-etc-ovs\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613241 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-log\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613269 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-lib\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613292 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62553ed-d73b-49c8-be06-e9ad0542d8da-ovn-controller-tls-certs\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613315 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-run-ovn\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613341 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4836b782-f203-42c9-95f7-58a33a861aa1-scripts\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613556 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-run\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613825 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-log\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.613917 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-run\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.614042 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-etc-ovs\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.614172 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4836b782-f203-42c9-95f7-58a33a861aa1-var-lib\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.614289 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a62553ed-d73b-49c8-be06-e9ad0542d8da-var-log-ovn\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.616524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a62553ed-d73b-49c8-be06-e9ad0542d8da-scripts\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.616921 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4836b782-f203-42c9-95f7-58a33a861aa1-scripts\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.618938 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62553ed-d73b-49c8-be06-e9ad0542d8da-ovn-controller-tls-certs\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.621462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62553ed-d73b-49c8-be06-e9ad0542d8da-combined-ca-bundle\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.624647 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.625943 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.628903 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.629237 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.629429 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.629654 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.629805 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-x5vgl"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.638282 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6d4j\" (UniqueName: \"kubernetes.io/projected/4836b782-f203-42c9-95f7-58a33a861aa1-kube-api-access-h6d4j\") pod \"ovn-controller-ovs-7bqn5\" (UID: \"4836b782-f203-42c9-95f7-58a33a861aa1\") " pod="openstack/ovn-controller-ovs-7bqn5"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.642810 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52g64\" (UniqueName: \"kubernetes.io/projected/a62553ed-d73b-49c8-be06-e9ad0542d8da-kube-api-access-52g64\") pod \"ovn-controller-m6c9b\" (UID: \"a62553ed-d73b-49c8-be06-e9ad0542d8da\") " pod="openstack/ovn-controller-m6c9b"
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.652529 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.714187 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0"
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.714266 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.714302 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plpfj\" (UniqueName: \"kubernetes.io/projected/75e93622-05f8-4afc-868b-0a6f157fa62b-kube-api-access-plpfj\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.714534 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75e93622-05f8-4afc-868b-0a6f157fa62b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.714606 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e93622-05f8-4afc-868b-0a6f157fa62b-config\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.714768 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75e93622-05f8-4afc-868b-0a6f157fa62b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.714807 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.725111 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.737552 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816566 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75e93622-05f8-4afc-868b-0a6f157fa62b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816611 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816645 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816667 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816691 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plpfj\" (UniqueName: \"kubernetes.io/projected/75e93622-05f8-4afc-868b-0a6f157fa62b-kube-api-access-plpfj\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816745 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75e93622-05f8-4afc-868b-0a6f157fa62b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.816766 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e93622-05f8-4afc-868b-0a6f157fa62b-config\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.817569 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e93622-05f8-4afc-868b-0a6f157fa62b-config\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.817868 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.818065 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75e93622-05f8-4afc-868b-0a6f157fa62b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.818712 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75e93622-05f8-4afc-868b-0a6f157fa62b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.827719 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.830008 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.834328 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e93622-05f8-4afc-868b-0a6f157fa62b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.836666 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plpfj\" (UniqueName: \"kubernetes.io/projected/75e93622-05f8-4afc-868b-0a6f157fa62b-kube-api-access-plpfj\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:34 crc kubenswrapper[4782]: I1124 12:11:34.843673 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"75e93622-05f8-4afc-868b-0a6f157fa62b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:35 crc kubenswrapper[4782]: I1124 12:11:35.008616 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.250639 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.252194 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.258897 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2zl6x" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.265630 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.265737 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.265626 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.282149 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.378486 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.379122 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.379333 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.380182 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.380355 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26gm\" (UniqueName: \"kubernetes.io/projected/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-kube-api-access-v26gm\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.383857 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.384173 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " 
pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.384396 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486458 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486496 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486516 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v26gm\" (UniqueName: \"kubernetes.io/projected/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-kube-api-access-v26gm\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486554 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486616 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486661 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.486921 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.487792 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.488957 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.491066 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.493724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.502171 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.502305 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.505727 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v26gm\" (UniqueName: \"kubernetes.io/projected/3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3-kube-api-access-v26gm\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.517387 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:38 crc kubenswrapper[4782]: I1124 12:11:38.595451 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 12:11:40 crc kubenswrapper[4782]: E1124 12:11:40.545659 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 12:11:40 crc kubenswrapper[4782]: E1124 12:11:40.546177 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fth46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(76bab5be-7cad-4dba-a4f4-bd53ab7f53fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:11:40 crc kubenswrapper[4782]: E1124 12:11:40.548218 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" Nov 24 12:11:41 crc kubenswrapper[4782]: E1124 12:11:41.388253 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.478104 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.479006 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7p62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vnz6r_openstack(d60fcad8-53c7-4a04-9fc9-3ed8189da17c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.480253 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" podUID="d60fcad8-53c7-4a04-9fc9-3ed8189da17c" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.572135 4782 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.572346 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htg76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-znq6j_openstack(44769636-e101-47e1-aaad-8ea51d7ef5f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.574452 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" podUID="44769636-e101-47e1-aaad-8ea51d7ef5f0" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.580559 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.580683 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts 
--domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-p8crc_openstack(32cceb35-3bf5-4e4b-b338-296eb56cf073): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.582625 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.623917 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.624126 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kvkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-9xbk5_openstack(05c76f00-86eb-4645-85cc-16264a012985): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:11:48 crc kubenswrapper[4782]: E1124 12:11:48.625845 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" podUID="05c76f00-86eb-4645-85cc-16264a012985" Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.188203 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:11:49 crc kubenswrapper[4782]: W1124 12:11:49.202686 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb66c75fd_ec79_4997_9e45_70865f612c8f.slice/crio-f85877f3a4ab789e68ff2305fdfe5d3a88222015ae967cbe62dacaf87df1a9d5 WatchSource:0}: Error finding container f85877f3a4ab789e68ff2305fdfe5d3a88222015ae967cbe62dacaf87df1a9d5: Status 404 returned error can't find the container with id f85877f3a4ab789e68ff2305fdfe5d3a88222015ae967cbe62dacaf87df1a9d5 Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.322800 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:11:49 crc kubenswrapper[4782]: W1124 12:11:49.327775 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod579fda47_7251_4722_b19c_eadbf6aaba21.slice/crio-c50a8fe0439276a96fd5cb07b0923a07aef7ea165bd55575b556c90086ed2623 WatchSource:0}: Error finding container c50a8fe0439276a96fd5cb07b0923a07aef7ea165bd55575b556c90086ed2623: Status 404 returned error can't find 
the container with id c50a8fe0439276a96fd5cb07b0923a07aef7ea165bd55575b556c90086ed2623 Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.328341 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.348589 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m6c9b"] Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.374413 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:11:49 crc kubenswrapper[4782]: W1124 12:11:49.381668 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb33a2a59_697b_4973_b01d_5933d2319593.slice/crio-1be35a2e73cf5bdeb139793fab32eed9f0d68895f3cc97c63dc2b12a4e192be9 WatchSource:0}: Error finding container 1be35a2e73cf5bdeb139793fab32eed9f0d68895f3cc97c63dc2b12a4e192be9: Status 404 returned error can't find the container with id 1be35a2e73cf5bdeb139793fab32eed9f0d68895f3cc97c63dc2b12a4e192be9 Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.443386 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b33a2a59-697b-4973-b01d-5933d2319593","Type":"ContainerStarted","Data":"1be35a2e73cf5bdeb139793fab32eed9f0d68895f3cc97c63dc2b12a4e192be9"} Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.444627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b" event={"ID":"a62553ed-d73b-49c8-be06-e9ad0542d8da","Type":"ContainerStarted","Data":"0accd7aa6c9fb5ef26c5b191cc49a13886d9ec8f50c1c71b1fd4d855d290ac93"} Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.448822 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"b6c7ce8c7383e549b268b473ebff145c305170441de464250aa04d4d9e063e16"} Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.456639 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b66c75fd-ec79-4997-9e45-70865f612c8f","Type":"ContainerStarted","Data":"f85877f3a4ab789e68ff2305fdfe5d3a88222015ae967cbe62dacaf87df1a9d5"} Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.457935 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"579fda47-7251-4722-b19c-eadbf6aaba21","Type":"ContainerStarted","Data":"c50a8fe0439276a96fd5cb07b0923a07aef7ea165bd55575b556c90086ed2623"} Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.460162 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2","Type":"ContainerStarted","Data":"4bb59431bc2cf32f742ed0eb7e4fbadf3bed58a1859b3d836248c412325215ac"} Nov 24 12:11:49 crc kubenswrapper[4782]: E1124 12:11:49.462321 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" podUID="05c76f00-86eb-4645-85cc-16264a012985" Nov 24 12:11:49 crc kubenswrapper[4782]: E1124 12:11:49.476069 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.767218 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.905437 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.914175 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htg76\" (UniqueName: \"kubernetes.io/projected/44769636-e101-47e1-aaad-8ea51d7ef5f0-kube-api-access-htg76\") pod \"44769636-e101-47e1-aaad-8ea51d7ef5f0\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.914401 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44769636-e101-47e1-aaad-8ea51d7ef5f0-config\") pod \"44769636-e101-47e1-aaad-8ea51d7ef5f0\" (UID: \"44769636-e101-47e1-aaad-8ea51d7ef5f0\") " Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.915490 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44769636-e101-47e1-aaad-8ea51d7ef5f0-config" (OuterVolumeSpecName: "config") pod "44769636-e101-47e1-aaad-8ea51d7ef5f0" (UID: "44769636-e101-47e1-aaad-8ea51d7ef5f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:49 crc kubenswrapper[4782]: I1124 12:11:49.922320 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44769636-e101-47e1-aaad-8ea51d7ef5f0-kube-api-access-htg76" (OuterVolumeSpecName: "kube-api-access-htg76") pod "44769636-e101-47e1-aaad-8ea51d7ef5f0" (UID: "44769636-e101-47e1-aaad-8ea51d7ef5f0"). InnerVolumeSpecName "kube-api-access-htg76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.015836 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7p62\" (UniqueName: \"kubernetes.io/projected/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-kube-api-access-j7p62\") pod \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.015952 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-config\") pod \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.016091 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-dns-svc\") pod \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\" (UID: \"d60fcad8-53c7-4a04-9fc9-3ed8189da17c\") " Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.016436 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htg76\" (UniqueName: \"kubernetes.io/projected/44769636-e101-47e1-aaad-8ea51d7ef5f0-kube-api-access-htg76\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.016452 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44769636-e101-47e1-aaad-8ea51d7ef5f0-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.016912 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d60fcad8-53c7-4a04-9fc9-3ed8189da17c" (UID: "d60fcad8-53c7-4a04-9fc9-3ed8189da17c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.017102 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-config" (OuterVolumeSpecName: "config") pod "d60fcad8-53c7-4a04-9fc9-3ed8189da17c" (UID: "d60fcad8-53c7-4a04-9fc9-3ed8189da17c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.018898 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-kube-api-access-j7p62" (OuterVolumeSpecName: "kube-api-access-j7p62") pod "d60fcad8-53c7-4a04-9fc9-3ed8189da17c" (UID: "d60fcad8-53c7-4a04-9fc9-3ed8189da17c"). InnerVolumeSpecName "kube-api-access-j7p62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.118123 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7p62\" (UniqueName: \"kubernetes.io/projected/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-kube-api-access-j7p62\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.118197 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.118214 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60fcad8-53c7-4a04-9fc9-3ed8189da17c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.158823 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7bqn5"] Nov 24 12:11:50 crc kubenswrapper[4782]: W1124 12:11:50.171581 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4836b782_f203_42c9_95f7_58a33a861aa1.slice/crio-57c2ffcb10448e2626be5d70fdb34c6e6323c1ab4b4f44eaa7267807033f867f WatchSource:0}: Error finding container 57c2ffcb10448e2626be5d70fdb34c6e6323c1ab4b4f44eaa7267807033f867f: Status 404 returned error can't find the container with id 57c2ffcb10448e2626be5d70fdb34c6e6323c1ab4b4f44eaa7267807033f867f Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.466629 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7bqn5" event={"ID":"4836b782-f203-42c9-95f7-58a33a861aa1","Type":"ContainerStarted","Data":"57c2ffcb10448e2626be5d70fdb34c6e6323c1ab4b4f44eaa7267807033f867f"} Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.468024 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.468520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vnz6r" event={"ID":"d60fcad8-53c7-4a04-9fc9-3ed8189da17c","Type":"ContainerDied","Data":"4f8c44a902240e6b3a00f8f8c8b14de948412b98971b64a81e382c2dd271e25d"} Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.470589 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"819def2d-6f25-42ca-91f6-6951b7b97549","Type":"ContainerStarted","Data":"152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f"} Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.472601 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.472667 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-znq6j" event={"ID":"44769636-e101-47e1-aaad-8ea51d7ef5f0","Type":"ContainerDied","Data":"120049989f3a5a70c218d6d2f5ba1fc8222bf838374f8dbe45779f463fd2fb19"} Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.547918 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-znq6j"] Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.554575 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-znq6j"] Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.607983 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnz6r"] Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.614153 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnz6r"] Nov 24 12:11:50 crc kubenswrapper[4782]: I1124 12:11:50.712593 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:11:51 crc kubenswrapper[4782]: I1124 12:11:51.506183 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44769636-e101-47e1-aaad-8ea51d7ef5f0" path="/var/lib/kubelet/pods/44769636-e101-47e1-aaad-8ea51d7ef5f0/volumes" Nov 24 12:11:51 crc kubenswrapper[4782]: I1124 12:11:51.507014 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60fcad8-53c7-4a04-9fc9-3ed8189da17c" path="/var/lib/kubelet/pods/d60fcad8-53c7-4a04-9fc9-3ed8189da17c/volumes" Nov 24 12:11:51 crc kubenswrapper[4782]: W1124 12:11:51.599403 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d8f0a45_8bf4_4c35_a6bf_1df2b2f860c3.slice/crio-33dd2c705e5b6efb0a9dbb9475ad272a6dbbea8101bb22131d5cdc93fe1ede4b WatchSource:0}: Error finding container 33dd2c705e5b6efb0a9dbb9475ad272a6dbbea8101bb22131d5cdc93fe1ede4b: Status 404 returned error can't find the container with id 33dd2c705e5b6efb0a9dbb9475ad272a6dbbea8101bb22131d5cdc93fe1ede4b Nov 24 12:11:51 crc kubenswrapper[4782]: I1124 12:11:51.792584 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 12:11:51 crc kubenswrapper[4782]: W1124 12:11:51.861089 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75e93622_05f8_4afc_868b_0a6f157fa62b.slice/crio-46757a9c4de6e2ca167db6138c54b9074f5e0b78ee258efbb5866864cc9dad07 WatchSource:0}: Error finding container 46757a9c4de6e2ca167db6138c54b9074f5e0b78ee258efbb5866864cc9dad07: Status 404 returned error can't find the container with id 46757a9c4de6e2ca167db6138c54b9074f5e0b78ee258efbb5866864cc9dad07 Nov 24 12:11:52 crc kubenswrapper[4782]: I1124 12:11:52.488118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3","Type":"ContainerStarted","Data":"33dd2c705e5b6efb0a9dbb9475ad272a6dbbea8101bb22131d5cdc93fe1ede4b"} Nov 24 12:11:52 crc kubenswrapper[4782]: I1124 12:11:52.489320 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"75e93622-05f8-4afc-868b-0a6f157fa62b","Type":"ContainerStarted","Data":"46757a9c4de6e2ca167db6138c54b9074f5e0b78ee258efbb5866864cc9dad07"} Nov 24 12:11:58 crc 
kubenswrapper[4782]: I1124 12:11:58.539584 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b66c75fd-ec79-4997-9e45-70865f612c8f","Type":"ContainerStarted","Data":"e95f52b66d46a441733ffbd5d3b1ffe4f2f08dcee5e24a535a41f1445acf493d"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.542717 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2","Type":"ContainerStarted","Data":"9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.542827 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.543991 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"579fda47-7251-4722-b19c-eadbf6aaba21","Type":"ContainerStarted","Data":"2ffd0ada54956c89067932868487c1112b1d7a7da28eb9bd10d99225045302cb"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.544105 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.545674 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b33a2a59-697b-4973-b01d-5933d2319593","Type":"ContainerStarted","Data":"178ecd2a7ae0a83893e6adf6f029b0badca22ff40ee10509ace4e2c5307a0338"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.547072 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b" event={"ID":"a62553ed-d73b-49c8-be06-e9ad0542d8da","Type":"ContainerStarted","Data":"1d08b4682354f50a8365db147bc906e0dce0aed0a03c1573809bfc2614457eb8"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.547645 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-m6c9b" Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.549110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7bqn5" event={"ID":"4836b782-f203-42c9-95f7-58a33a861aa1","Type":"ContainerStarted","Data":"151857c15753f36fdc5d3a0ae12e3b14972675f02dbcb051dc8dbc69870404d2"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.550252 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3","Type":"ContainerStarted","Data":"002e7650a5a3e04dac9333c4fa5892b8603cd547e0d9de591d2cbef03fbd6184"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.551512 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"75e93622-05f8-4afc-868b-0a6f157fa62b","Type":"ContainerStarted","Data":"63cbfe6b63fc8774207f36c66f82c763d1fc2a59c7fdae3f033c0ebf3f64fb1f"} Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.592079 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-m6c9b" podStartSLOduration=16.179319766 podStartE2EDuration="24.592057709s" podCreationTimestamp="2025-11-24 12:11:34 +0000 UTC" firstStartedPulling="2025-11-24 12:11:49.36883897 +0000 UTC m=+958.612672739" lastFinishedPulling="2025-11-24 12:11:57.781576913 +0000 UTC m=+967.025410682" observedRunningTime="2025-11-24 12:11:58.58311946 +0000 UTC m=+967.826953229" watchObservedRunningTime="2025-11-24 12:11:58.592057709 +0000 UTC m=+967.835891488" Nov 24 12:11:58 
crc kubenswrapper[4782]: I1124 12:11:58.607562 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.591926215 podStartE2EDuration="30.607548734s" podCreationTimestamp="2025-11-24 12:11:28 +0000 UTC" firstStartedPulling="2025-11-24 12:11:49.330696459 +0000 UTC m=+958.574530228" lastFinishedPulling="2025-11-24 12:11:52.346318978 +0000 UTC m=+961.590152747" observedRunningTime="2025-11-24 12:11:58.604605735 +0000 UTC m=+967.848439514" watchObservedRunningTime="2025-11-24 12:11:58.607548734 +0000 UTC m=+967.851382503" Nov 24 12:11:58 crc kubenswrapper[4782]: I1124 12:11:58.626580 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.076457652 podStartE2EDuration="27.626558853s" podCreationTimestamp="2025-11-24 12:11:31 +0000 UTC" firstStartedPulling="2025-11-24 12:11:49.344719834 +0000 UTC m=+958.588553603" lastFinishedPulling="2025-11-24 12:11:57.894821035 +0000 UTC m=+967.138654804" observedRunningTime="2025-11-24 12:11:58.621343093 +0000 UTC m=+967.865176862" watchObservedRunningTime="2025-11-24 12:11:58.626558853 +0000 UTC m=+967.870392652" Nov 24 12:11:59 crc kubenswrapper[4782]: I1124 12:11:59.561208 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb","Type":"ContainerStarted","Data":"2b7a8ecd6eae3c7f121e653b6d11687117e4f39b41b193458a93c02b4f52a8da"} Nov 24 12:11:59 crc kubenswrapper[4782]: I1124 12:11:59.563419 4782 generic.go:334] "Generic (PLEG): container finished" podID="4836b782-f203-42c9-95f7-58a33a861aa1" containerID="151857c15753f36fdc5d3a0ae12e3b14972675f02dbcb051dc8dbc69870404d2" exitCode=0 Nov 24 12:11:59 crc kubenswrapper[4782]: I1124 12:11:59.563553 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7bqn5" event={"ID":"4836b782-f203-42c9-95f7-58a33a861aa1","Type":"ContainerDied","Data":"151857c15753f36fdc5d3a0ae12e3b14972675f02dbcb051dc8dbc69870404d2"} Nov 24 12:12:00 crc kubenswrapper[4782]: I1124 12:12:00.574064 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7bqn5" event={"ID":"4836b782-f203-42c9-95f7-58a33a861aa1","Type":"ContainerStarted","Data":"0b5b992ba3d6e92757c0b0b2f82869175479e41abbd677d22e8880197fe5ad9f"} Nov 24 12:12:00 crc kubenswrapper[4782]: I1124 12:12:00.574714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7bqn5" event={"ID":"4836b782-f203-42c9-95f7-58a33a861aa1","Type":"ContainerStarted","Data":"66bd66f0caa69e8365619bdae5c20c8bda5f91c57a72752161e777741019e414"} Nov 24 12:12:00 crc kubenswrapper[4782]: I1124 12:12:00.574734 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:12:00 crc kubenswrapper[4782]: I1124 12:12:00.601246 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7bqn5" podStartSLOduration=18.994827879 podStartE2EDuration="26.601224146s" podCreationTimestamp="2025-11-24 12:11:34 +0000 UTC" firstStartedPulling="2025-11-24 12:11:50.175065983 +0000 UTC m=+959.418899752" lastFinishedPulling="2025-11-24 12:11:57.78146225 +0000 UTC m=+967.025296019" observedRunningTime="2025-11-24 12:12:00.595369719 +0000 UTC m=+969.839203508" watchObservedRunningTime="2025-11-24 12:12:00.601224146 +0000 UTC m=+969.845057925" Nov 24 12:12:01 crc kubenswrapper[4782]: I1124 12:12:01.583054 
4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:12:04 crc kubenswrapper[4782]: I1124 12:12:04.131243 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 12:12:07 crc kubenswrapper[4782]: I1124 12:12:07.631717 4782 generic.go:334] "Generic (PLEG): container finished" podID="b33a2a59-697b-4973-b01d-5933d2319593" containerID="178ecd2a7ae0a83893e6adf6f029b0badca22ff40ee10509ace4e2c5307a0338" exitCode=0 Nov 24 12:12:07 crc kubenswrapper[4782]: I1124 12:12:07.632045 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b33a2a59-697b-4973-b01d-5933d2319593","Type":"ContainerDied","Data":"178ecd2a7ae0a83893e6adf6f029b0badca22ff40ee10509ace4e2c5307a0338"} Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.645313 4782 generic.go:334] "Generic (PLEG): container finished" podID="b66c75fd-ec79-4997-9e45-70865f612c8f" containerID="e95f52b66d46a441733ffbd5d3b1ffe4f2f08dcee5e24a535a41f1445acf493d" exitCode=0 Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.645401 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b66c75fd-ec79-4997-9e45-70865f612c8f","Type":"ContainerDied","Data":"e95f52b66d46a441733ffbd5d3b1ffe4f2f08dcee5e24a535a41f1445acf493d"} Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.648519 4782 generic.go:334] "Generic (PLEG): container finished" podID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerID="5fe72ef003ac352caeed44546d4b204a2092934d4d927f32fcd8a26eb1a8db4c" exitCode=0 Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.648582 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" event={"ID":"32cceb35-3bf5-4e4b-b338-296eb56cf073","Type":"ContainerDied","Data":"5fe72ef003ac352caeed44546d4b204a2092934d4d927f32fcd8a26eb1a8db4c"} Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.651263 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b33a2a59-697b-4973-b01d-5933d2319593","Type":"ContainerStarted","Data":"978bf713d8a05bc2c5bdee7fe96b7768abad1df4e55810b1687e9a6314f79048"} Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.653600 4782 generic.go:334] "Generic (PLEG): container finished" podID="05c76f00-86eb-4645-85cc-16264a012985" containerID="28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8" exitCode=0 Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.653681 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" event={"ID":"05c76f00-86eb-4645-85cc-16264a012985","Type":"ContainerDied","Data":"28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8"} Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.658505 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3","Type":"ContainerStarted","Data":"99a8bc0b92eb4de0e8f904a23fdb8fcbcd4c6b425829db22c566bc57ff334e57"} Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.660540 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"75e93622-05f8-4afc-868b-0a6f157fa62b","Type":"ContainerStarted","Data":"c6d19a72112518ab05ce01494b9a7481b18b46b74fe45137bee65f6391b9e9d9"} Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.700352 4782 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.802547441 podStartE2EDuration="31.700330172s" podCreationTimestamp="2025-11-24 12:11:37 +0000 UTC" firstStartedPulling="2025-11-24 12:11:51.601988752 +0000 UTC m=+960.845822521" lastFinishedPulling="2025-11-24 12:12:07.499771483 +0000 UTC m=+976.743605252" observedRunningTime="2025-11-24 12:12:08.694687461 +0000 UTC m=+977.938521230" watchObservedRunningTime="2025-11-24 12:12:08.700330172 +0000 UTC m=+977.944163941" Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.784032 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.10393986 podStartE2EDuration="35.784015193s" podCreationTimestamp="2025-11-24 12:11:33 +0000 UTC" firstStartedPulling="2025-11-24 12:11:51.869145394 +0000 UTC m=+961.112979173" lastFinishedPulling="2025-11-24 12:12:07.549220737 +0000 UTC m=+976.793054506" observedRunningTime="2025-11-24 12:12:08.741907585 +0000 UTC m=+977.985741364" watchObservedRunningTime="2025-11-24 12:12:08.784015193 +0000 UTC m=+978.027848962" Nov 24 12:12:08 crc kubenswrapper[4782]: I1124 12:12:08.806029 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=34.807034574 podStartE2EDuration="42.80596817s" podCreationTimestamp="2025-11-24 12:11:26 +0000 UTC" firstStartedPulling="2025-11-24 12:11:49.385021143 +0000 UTC m=+958.628854912" lastFinishedPulling="2025-11-24 12:11:57.383954739 +0000 UTC m=+966.627788508" observedRunningTime="2025-11-24 12:12:08.799308992 +0000 UTC m=+978.043142761" watchObservedRunningTime="2025-11-24 12:12:08.80596817 +0000 UTC m=+978.049801949" Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.669230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" event={"ID":"05c76f00-86eb-4645-85cc-16264a012985","Type":"ContainerStarted","Data":"2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2"} Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.670758 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.672542 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b66c75fd-ec79-4997-9e45-70865f612c8f","Type":"ContainerStarted","Data":"7324e94067c18efbbcdf9dfb2c23ebce6bfd215a00bb774623fa13becc3c801b"} Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.675571 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" event={"ID":"32cceb35-3bf5-4e4b-b338-296eb56cf073","Type":"ContainerStarted","Data":"458807316e00c6a29a04c7d99a0a4001d3fb92f11862f406a322d9f014d235b5"} Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.676025 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.690389 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" podStartSLOduration=3.67145302 podStartE2EDuration="45.690361266s" podCreationTimestamp="2025-11-24 12:11:24 +0000 UTC" firstStartedPulling="2025-11-24 12:11:25.575705126 +0000 UTC m=+934.819538895" lastFinishedPulling="2025-11-24 12:12:07.594613372 +0000 UTC m=+976.838447141" observedRunningTime="2025-11-24 12:12:09.686080472 +0000 UTC 
m=+978.929914251" watchObservedRunningTime="2025-11-24 12:12:09.690361266 +0000 UTC m=+978.934195025" Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.707802 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" podStartSLOduration=3.4526922239999998 podStartE2EDuration="45.707789143s" podCreationTimestamp="2025-11-24 12:11:24 +0000 UTC" firstStartedPulling="2025-11-24 12:11:25.294134868 +0000 UTC m=+934.537968637" lastFinishedPulling="2025-11-24 12:12:07.549231787 +0000 UTC m=+976.793065556" observedRunningTime="2025-11-24 12:12:09.704671699 +0000 UTC m=+978.948505468" watchObservedRunningTime="2025-11-24 12:12:09.707789143 +0000 UTC m=+978.951622912" Nov 24 12:12:09 crc kubenswrapper[4782]: I1124 12:12:09.729223 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.76761849 podStartE2EDuration="42.729200816s" podCreationTimestamp="2025-11-24 12:11:27 +0000 UTC" firstStartedPulling="2025-11-24 12:11:49.204220683 +0000 UTC m=+958.448054452" lastFinishedPulling="2025-11-24 12:11:57.165803009 +0000 UTC m=+966.409636778" observedRunningTime="2025-11-24 12:12:09.728563719 +0000 UTC m=+978.972397508" watchObservedRunningTime="2025-11-24 12:12:09.729200816 +0000 UTC m=+978.973034595" Nov 24 12:12:10 crc kubenswrapper[4782]: I1124 12:12:10.008889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.008885 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.059830 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.586020 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.595832 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.683599 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9xbk5"] Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.701490 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" podUID="05c76f00-86eb-4645-85cc-16264a012985" containerName="dnsmasq-dns" containerID="cri-o://2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2" gracePeriod=10 Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.737774 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-pdc2b"] Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.738958 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.738958 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.762032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.762409 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-config\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.762562 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzv2c\" (UniqueName: \"kubernetes.io/projected/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-kube-api-access-jzv2c\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.767200 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-pdc2b"]
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.793898 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.830536 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.833230 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.933048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.933159 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-config\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.933181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzv2c\" (UniqueName: \"kubernetes.io/projected/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-kube-api-access-jzv2c\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.936229 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.943747 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-config\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.947226 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 24 12:12:11 crc kubenswrapper[4782]: I1124 12:12:11.970179 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzv2c\" (UniqueName: \"kubernetes.io/projected/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-kube-api-access-jzv2c\") pod \"dnsmasq-dns-7cb5889db5-pdc2b\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.044008 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p8crc"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.044267 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerName="dnsmasq-dns" containerID="cri-o://458807316e00c6a29a04c7d99a0a4001d3fb92f11862f406a322d9f014d235b5" gracePeriod=10
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.077522 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-rlfbc"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.080764 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.082620 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.090053 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.102545 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-rlfbc"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.148127 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hjt97"]
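Every volume above moves through the same three reconciler stages: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A throwaway tracker along these lines (our sketch, reading a journal dump on stdin; the volume names appear inside escaped quotes in these entries, which the patterns account for) flags any (pod, volume) pair that never reaches the final stage:

import re
import sys
from collections import defaultdict

# Stage patterns match the escaped \" quoting used in the journal text above.
STAGES = [
    ("attached", r'VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\"'),
    ("mounting", r'MountVolume started for volume \\"([^"\\]+)\\"'),
    ("mounted",  r'MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"'),
]

def last_stage(lines):
    state = defaultdict(str)
    for line in lines:
        pod = re.search(r'pod="([^"]+)"', line)
        for stage, pattern in STAGES:
            hit = re.search(pattern, line)
            if hit and pod:
                state[(pod.group(1), hit.group(1))] = stage
    return state

if __name__ == "__main__":
    for (pod, volume), stage in sorted(last_stage(sys.stdin).items()):
        if stage != "mounted":
            print(f"{pod} {volume}: stuck at {stage}")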
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.149432 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.156713 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.159960 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hjt97"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241171 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95804fb9-455c-4226-acb2-97418cd75b7e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241390 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95804fb9-455c-4226-acb2-97418cd75b7e-config\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241413 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241428 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95804fb9-455c-4226-acb2-97418cd75b7e-ovs-rundir\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241452 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-dns-svc\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241482 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhkzk\" (UniqueName: \"kubernetes.io/projected/253c23a8-1c16-4771-b8ae-431577631442-kube-api-access-hhkzk\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241499 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95804fb9-455c-4226-acb2-97418cd75b7e-ovn-rundir\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241517 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6fs\" (UniqueName: \"kubernetes.io/projected/95804fb9-455c-4226-acb2-97418cd75b7e-kube-api-access-lv6fs\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241555 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95804fb9-455c-4226-acb2-97418cd75b7e-combined-ca-bundle\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.241592 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-config\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.325512 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.328789 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.333243 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95804fb9-455c-4226-acb2-97418cd75b7e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95804fb9-455c-4226-acb2-97418cd75b7e-config\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343482 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343499 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95804fb9-455c-4226-acb2-97418cd75b7e-ovs-rundir\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343519 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-dns-svc\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhkzk\" (UniqueName: \"kubernetes.io/projected/253c23a8-1c16-4771-b8ae-431577631442-kube-api-access-hhkzk\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343567 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95804fb9-455c-4226-acb2-97418cd75b7e-ovn-rundir\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343584 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv6fs\" (UniqueName: \"kubernetes.io/projected/95804fb9-455c-4226-acb2-97418cd75b7e-kube-api-access-lv6fs\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95804fb9-455c-4226-acb2-97418cd75b7e-combined-ca-bundle\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343672 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-config\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.343829 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95804fb9-455c-4226-acb2-97418cd75b7e-ovs-rundir\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.344074 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95804fb9-455c-4226-acb2-97418cd75b7e-ovn-rundir\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.344513 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-config\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.344563 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.344636 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95804fb9-455c-4226-acb2-97418cd75b7e-config\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.346619 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-dns-svc\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.353608 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.353806 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xn74k"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.353999 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.354132 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.354309 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95804fb9-455c-4226-acb2-97418cd75b7e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.369946 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv6fs\" (UniqueName: \"kubernetes.io/projected/95804fb9-455c-4226-acb2-97418cd75b7e-kube-api-access-lv6fs\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.377606 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95804fb9-455c-4226-acb2-97418cd75b7e-combined-ca-bundle\") pod \"ovn-controller-metrics-hjt97\" (UID: \"95804fb9-455c-4226-acb2-97418cd75b7e\") " pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.382185 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-rlfbc"]
Nov 24 12:12:12 crc kubenswrapper[4782]: E1124 12:12:12.383418 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-hhkzk], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-57d65f699f-rlfbc" podUID="253c23a8-1c16-4771-b8ae-431577631442"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.387961 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhkzk\" (UniqueName: \"kubernetes.io/projected/253c23a8-1c16-4771-b8ae-431577631442-kube-api-access-hhkzk\") pod \"dnsmasq-dns-57d65f699f-rlfbc\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") " pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.448873 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bs6cm"]
(UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.455610 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.455668 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.455696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.455741 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-scripts\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.455776 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-config\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.455800 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkv5\" (UniqueName: \"kubernetes.io/projected/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-kube-api-access-slkv5\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.463826 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bs6cm"] Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.463927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.469688 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.554713 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557637 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557686 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557709 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557755 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-config\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557788 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-scripts\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557835 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-config\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557859 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557877 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkv5\" (UniqueName: \"kubernetes.io/projected/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-kube-api-access-slkv5\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557894 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557930 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.557964 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtldd\" (UniqueName: \"kubernetes.io/projected/4c0393b0-4e2a-449c-ad19-aaddc8017944-kube-api-access-dtldd\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.565493 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.566768 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.567129 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-config\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.567707 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-scripts\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.570608 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.570826 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.591247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkv5\" (UniqueName: \"kubernetes.io/projected/4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0-kube-api-access-slkv5\") pod \"ovn-northd-0\" (UID: \"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0\") " pod="openstack/ovn-northd-0" Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.638188 4782 
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.638188 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hjt97"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.658680 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-config\") pod \"05c76f00-86eb-4645-85cc-16264a012985\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") "
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.658734 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kvkx\" (UniqueName: \"kubernetes.io/projected/05c76f00-86eb-4645-85cc-16264a012985-kube-api-access-4kvkx\") pod \"05c76f00-86eb-4645-85cc-16264a012985\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") "
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.658763 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-dns-svc\") pod \"05c76f00-86eb-4645-85cc-16264a012985\" (UID: \"05c76f00-86eb-4645-85cc-16264a012985\") "
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.659081 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.659113 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtldd\" (UniqueName: \"kubernetes.io/projected/4c0393b0-4e2a-449c-ad19-aaddc8017944-kube-api-access-dtldd\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.659185 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-config\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.659227 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.659247 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.660081 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.663796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-config\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.665126 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.667885 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.677616 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c76f00-86eb-4645-85cc-16264a012985-kube-api-access-4kvkx" (OuterVolumeSpecName: "kube-api-access-4kvkx") pod "05c76f00-86eb-4645-85cc-16264a012985" (UID: "05c76f00-86eb-4645-85cc-16264a012985"). InnerVolumeSpecName "kube-api-access-4kvkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.683508 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtldd\" (UniqueName: \"kubernetes.io/projected/4c0393b0-4e2a-449c-ad19-aaddc8017944-kube-api-access-dtldd\") pod \"dnsmasq-dns-b8fbc5445-bs6cm\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.706210 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05c76f00-86eb-4645-85cc-16264a012985" (UID: "05c76f00-86eb-4645-85cc-16264a012985"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.720657 4782 generic.go:334] "Generic (PLEG): container finished" podID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerID="458807316e00c6a29a04c7d99a0a4001d3fb92f11862f406a322d9f014d235b5" exitCode=0
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.720724 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" event={"ID":"32cceb35-3bf5-4e4b-b338-296eb56cf073","Type":"ContainerDied","Data":"458807316e00c6a29a04c7d99a0a4001d3fb92f11862f406a322d9f014d235b5"}
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.727060 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-config" (OuterVolumeSpecName: "config") pod "05c76f00-86eb-4645-85cc-16264a012985" (UID: "05c76f00-86eb-4645-85cc-16264a012985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.728958 4782 generic.go:334] "Generic (PLEG): container finished" podID="05c76f00-86eb-4645-85cc-16264a012985" containerID="2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2" exitCode=0
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.729790 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.732573 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" event={"ID":"05c76f00-86eb-4645-85cc-16264a012985","Type":"ContainerDied","Data":"2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2"}
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.732618 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9xbk5" event={"ID":"05c76f00-86eb-4645-85cc-16264a012985","Type":"ContainerDied","Data":"0900c697a98fedf576c7fae81345bcef1b8cc4c13f0d6f056b09d6fee58a96c0"}
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.732636 4782 scope.go:117] "RemoveContainer" containerID="2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.732794 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.748658 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-rlfbc"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.766999 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.767018 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kvkx\" (UniqueName: \"kubernetes.io/projected/05c76f00-86eb-4645-85cc-16264a012985-kube-api-access-4kvkx\") on node \"crc\" DevicePath \"\""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.767029 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c76f00-86eb-4645-85cc-16264a012985-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.773083 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.781065 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9xbk5"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.784441 4782 scope.go:117] "RemoveContainer" containerID="28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.787428 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9xbk5"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.808345 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.836208 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-pdc2b"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.855665 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Nov 24 12:12:12 crc kubenswrapper[4782]: E1124 12:12:12.857541 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c76f00-86eb-4645-85cc-16264a012985" containerName="dnsmasq-dns"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.857565 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c76f00-86eb-4645-85cc-16264a012985" containerName="dnsmasq-dns"
Nov 24 12:12:12 crc kubenswrapper[4782]: E1124 12:12:12.857587 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c76f00-86eb-4645-85cc-16264a012985" containerName="init"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.857595 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c76f00-86eb-4645-85cc-16264a012985" containerName="init"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.857818 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c76f00-86eb-4645-85cc-16264a012985" containerName="dnsmasq-dns"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.868508 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-dns-svc\") pod \"253c23a8-1c16-4771-b8ae-431577631442\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") "
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.868607 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-config\") pod \"253c23a8-1c16-4771-b8ae-431577631442\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") "
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.868672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhkzk\" (UniqueName: \"kubernetes.io/projected/253c23a8-1c16-4771-b8ae-431577631442-kube-api-access-hhkzk\") pod \"253c23a8-1c16-4771-b8ae-431577631442\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") "
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.868714 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-ovsdbserver-nb\") pod \"253c23a8-1c16-4771-b8ae-431577631442\" (UID: \"253c23a8-1c16-4771-b8ae-431577631442\") "
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.869256 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-config" (OuterVolumeSpecName: "config") pod "253c23a8-1c16-4771-b8ae-431577631442" (UID: "253c23a8-1c16-4771-b8ae-431577631442"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.869889 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "253c23a8-1c16-4771-b8ae-431577631442" (UID: "253c23a8-1c16-4771-b8ae-431577631442"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.871474 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "253c23a8-1c16-4771-b8ae-431577631442" (UID: "253c23a8-1c16-4771-b8ae-431577631442"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.876714 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.880037 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dmdww"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.880397 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.883102 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253c23a8-1c16-4771-b8ae-431577631442-kube-api-access-hhkzk" (OuterVolumeSpecName: "kube-api-access-hhkzk") pod "253c23a8-1c16-4771-b8ae-431577631442" (UID: "253c23a8-1c16-4771-b8ae-431577631442"). InnerVolumeSpecName "kube-api-access-hhkzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.883502 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.894154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.895814 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.906579 4782 scope.go:117] "RemoveContainer" containerID="2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2"
Nov 24 12:12:12 crc kubenswrapper[4782]: E1124 12:12:12.909896 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2\": container with ID starting with 2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2 not found: ID does not exist" containerID="2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.909936 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2"} err="failed to get container status \"2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2\": rpc error: code = NotFound desc = could not find container \"2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2\": container with ID starting with 2a4dafb51759e8c2c05891ec0cb5b6d4036d49017e134262635907835b915dd2 not found: ID does not exist"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.909965 4782 scope.go:117] "RemoveContainer" containerID="28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8"
Nov 24 12:12:12 crc kubenswrapper[4782]: E1124 12:12:12.911079 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8\": container with ID starting with 28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8 not found: ID does not exist" containerID="28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.912596 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8"} err="failed to get container status \"28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8\": rpc error: code = NotFound desc = could not find container \"28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8\": container with ID starting with 28fce07ba10018a872221513c130146efa3f201b302be3f931852e62e23703b8 not found: ID does not exist"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.973434 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81dbdeba-8b69-4638-b076-29f9edaeffa6-lock\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.973491 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81dbdeba-8b69-4638-b076-29f9edaeffa6-cache\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.973514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.973542 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.973635 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvv8\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-kube-api-access-jxvv8\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.973970 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.973995 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhkzk\" (UniqueName: \"kubernetes.io/projected/253c23a8-1c16-4771-b8ae-431577631442-kube-api-access-hhkzk\") on node \"crc\" DevicePath \"\""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.974009 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 12:12:12 crc kubenswrapper[4782]: I1124 12:12:12.974022 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253c23a8-1c16-4771-b8ae-431577631442-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.087549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81dbdeba-8b69-4638-b076-29f9edaeffa6-lock\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.087620 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81dbdeba-8b69-4638-b076-29f9edaeffa6-cache\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.087643 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.087678 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.087778 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvv8\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-kube-api-access-jxvv8\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:13 crc kubenswrapper[4782]: E1124 12:12:13.088315 4782 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 24 12:12:13 crc kubenswrapper[4782]: E1124 12:12:13.088329 4782 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.088644 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81dbdeba-8b69-4638-b076-29f9edaeffa6-lock\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.088431 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81dbdeba-8b69-4638-b076-29f9edaeffa6-cache\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.088894 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0"
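The projected.go errors above are an ordering artifact of the deployment: swift-storage-0's etc-swift volume projects the swift-ring-files ConfigMap, which is produced by the swift-ring-rebalance job that is only scheduled moments later, so the first SetUp attempt cannot succeed. The kubelet converts the failure into the per-volume retry recorded in the next entry (durationBeforeRetry 500ms, with the delay typically growing on repeated failures). A parser for that retry line (our regex, matching the unescaped quoting this particular message uses; not a stable kubelet format):

import re

RETRY = re.compile(
    r'No retries permitted until (?P<until>\S+ \S+ \+0000 UTC)'
    r'.*\(durationBeforeRetry (?P<delay>[^)]+)\)\. Error: '
    r'MountVolume\.SetUp failed for volume "(?P<volume>[^"]+)"'
    r'.*\) : (?P<reason>.*)$'
)

def parse_retry(line: str):
    """Return {'until', 'delay', 'volume', 'pod', 'reason'}-style fields, or None."""
    m = RETRY.search(line)
    if not m:
        return None
    d = m.groupdict()
    pod = re.search(r'pod "([^"]+)"', line)
    d["pod"] = pod.group(1) if pod else None
    return d

# On the entry below this yields delay='500ms', volume='etc-swift',
# pod='swift-storage-0', reason='configmap "swift-ring-files" not found'.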
"{volumeName:kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift podName:81dbdeba-8b69-4638-b076-29f9edaeffa6 nodeName:}" failed. No retries permitted until 2025-11-24 12:12:13.588351712 +0000 UTC m=+982.832185481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift") pod "swift-storage-0" (UID: "81dbdeba-8b69-4638-b076-29f9edaeffa6") : configmap "swift-ring-files" not found Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.096129 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.120729 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hjt97"] Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.145093 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvv8\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-kube-api-access-jxvv8\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.167002 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.188513 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfhcn\" (UniqueName: \"kubernetes.io/projected/32cceb35-3bf5-4e4b-b338-296eb56cf073-kube-api-access-zfhcn\") pod \"32cceb35-3bf5-4e4b-b338-296eb56cf073\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.188900 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-dns-svc\") pod \"32cceb35-3bf5-4e4b-b338-296eb56cf073\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.189048 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-config\") pod \"32cceb35-3bf5-4e4b-b338-296eb56cf073\" (UID: \"32cceb35-3bf5-4e4b-b338-296eb56cf073\") " Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.193353 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cceb35-3bf5-4e4b-b338-296eb56cf073-kube-api-access-zfhcn" (OuterVolumeSpecName: "kube-api-access-zfhcn") pod "32cceb35-3bf5-4e4b-b338-296eb56cf073" (UID: "32cceb35-3bf5-4e4b-b338-296eb56cf073"). InnerVolumeSpecName "kube-api-access-zfhcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.271343 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-config" (OuterVolumeSpecName: "config") pod "32cceb35-3bf5-4e4b-b338-296eb56cf073" (UID: "32cceb35-3bf5-4e4b-b338-296eb56cf073"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.272771 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32cceb35-3bf5-4e4b-b338-296eb56cf073" (UID: "32cceb35-3bf5-4e4b-b338-296eb56cf073"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.292337 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfhcn\" (UniqueName: \"kubernetes.io/projected/32cceb35-3bf5-4e4b-b338-296eb56cf073-kube-api-access-zfhcn\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.292365 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.292441 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cceb35-3bf5-4e4b-b338-296eb56cf073-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.359694 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dnt8l"] Nov 24 12:12:13 crc kubenswrapper[4782]: E1124 12:12:13.360056 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerName="dnsmasq-dns" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.360071 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerName="dnsmasq-dns" Nov 24 12:12:13 crc kubenswrapper[4782]: E1124 12:12:13.360093 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerName="init" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.360099 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerName="init" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.360277 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" containerName="dnsmasq-dns" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.360815 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.366675 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.368435 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.368757 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.370882 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dnt8l"] Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.395495 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-swiftconf\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.395616 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-dispersionconf\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.395749 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-combined-ca-bundle\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.395887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-scripts\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.395932 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdtv\" (UniqueName: \"kubernetes.io/projected/8b31b3d1-1239-45a8-9380-693d4ce10324-kube-api-access-htdtv\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.395959 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-ring-data-devices\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.396030 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b31b3d1-1239-45a8-9380-693d4ce10324-etc-swift\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 
12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.439362 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.499165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-dispersionconf\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.499229 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-combined-ca-bundle\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.499304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-scripts\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.499335 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdtv\" (UniqueName: \"kubernetes.io/projected/8b31b3d1-1239-45a8-9380-693d4ce10324-kube-api-access-htdtv\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.499360 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-ring-data-devices\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.500846 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b31b3d1-1239-45a8-9380-693d4ce10324-etc-swift\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.500930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-swiftconf\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.501575 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-scripts\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.501598 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b31b3d1-1239-45a8-9380-693d4ce10324-etc-swift\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 
24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.501722 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-ring-data-devices\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.501752 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c76f00-86eb-4645-85cc-16264a012985" path="/var/lib/kubelet/pods/05c76f00-86eb-4645-85cc-16264a012985/volumes" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.507501 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-dispersionconf\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.519222 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-combined-ca-bundle\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.524004 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-swiftconf\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.524853 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdtv\" (UniqueName: \"kubernetes.io/projected/8b31b3d1-1239-45a8-9380-693d4ce10324-kube-api-access-htdtv\") pod \"swift-ring-rebalance-dnt8l\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.602114 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0" Nov 24 12:12:13 crc kubenswrapper[4782]: E1124 12:12:13.602599 4782 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:12:13 crc kubenswrapper[4782]: E1124 12:12:13.602614 4782 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:12:13 crc kubenswrapper[4782]: E1124 12:12:13.602672 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift podName:81dbdeba-8b69-4638-b076-29f9edaeffa6 nodeName:}" failed. No retries permitted until 2025-11-24 12:12:14.6026586 +0000 UTC m=+983.846492369 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift") pod "swift-storage-0" (UID: "81dbdeba-8b69-4638-b076-29f9edaeffa6") : configmap "swift-ring-files" not found Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.641257 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bs6cm"] Nov 24 12:12:13 crc kubenswrapper[4782]: W1124 12:12:13.662809 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c0393b0_4e2a_449c_ad19_aaddc8017944.slice/crio-fd68247b75c80ba03e467e32a73c040e049531e77e7b6983e7493fd626375c87 WatchSource:0}: Error finding container fd68247b75c80ba03e467e32a73c040e049531e77e7b6983e7493fd626375c87: Status 404 returned error can't find the container with id fd68247b75c80ba03e467e32a73c040e049531e77e7b6983e7493fd626375c87 Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.705086 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.741595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" event={"ID":"4c0393b0-4e2a-449c-ad19-aaddc8017944","Type":"ContainerStarted","Data":"fd68247b75c80ba03e467e32a73c040e049531e77e7b6983e7493fd626375c87"} Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.742547 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hjt97" event={"ID":"95804fb9-455c-4226-acb2-97418cd75b7e","Type":"ContainerStarted","Data":"31409060b3ba305a1ba74ffaf473f8313f176e3c00b043d3144553d459330d43"} Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.742571 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hjt97" event={"ID":"95804fb9-455c-4226-acb2-97418cd75b7e","Type":"ContainerStarted","Data":"761aa8d7822251584be45e61117cf5443366f3ae0a0c221451f3cea5f326e410"} Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.744404 4782 generic.go:334] "Generic (PLEG): container finished" podID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerID="aa520cd9a15189177e5356fbcc528ef643480fc50a0d0b2fe1938a519bcfd0d0" exitCode=0 Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.744448 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" event={"ID":"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd","Type":"ContainerDied","Data":"aa520cd9a15189177e5356fbcc528ef643480fc50a0d0b2fe1938a519bcfd0d0"} Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.744462 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" event={"ID":"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd","Type":"ContainerStarted","Data":"4dec92100876ef2f6003af855a98a6bd9917b55374388e99111c2f393eb1bdb1"} Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.752168 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.755087 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-p8crc" event={"ID":"32cceb35-3bf5-4e4b-b338-296eb56cf073","Type":"ContainerDied","Data":"eb085a54f5b2b1a24389d82f7941a71566aa93aeec98cbd1b1fa667268da5463"} Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.755147 4782 scope.go:117] "RemoveContainer" containerID="458807316e00c6a29a04c7d99a0a4001d3fb92f11862f406a322d9f014d235b5" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.756753 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-rlfbc" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.756815 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0","Type":"ContainerStarted","Data":"38415569a8895d685ad2407a61ef74342ba80fc39f47e025456fab1605a42e04"} Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.774562 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hjt97" podStartSLOduration=1.7745348619999999 podStartE2EDuration="1.774534862s" podCreationTimestamp="2025-11-24 12:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:13.76474945 +0000 UTC m=+983.008583219" watchObservedRunningTime="2025-11-24 12:12:13.774534862 +0000 UTC m=+983.018368631" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.798745 4782 scope.go:117] "RemoveContainer" containerID="5fe72ef003ac352caeed44546d4b204a2092934d4d927f32fcd8a26eb1a8db4c" Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.833965 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p8crc"] Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.846665 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p8crc"] Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.862155 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-rlfbc"] Nov 24 12:12:13 crc kubenswrapper[4782]: I1124 12:12:13.874831 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-rlfbc"] Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.292098 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dnt8l"] Nov 24 12:12:14 crc kubenswrapper[4782]: W1124 12:12:14.305001 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b31b3d1_1239_45a8_9380_693d4ce10324.slice/crio-30d5a413593f8d1b252aaa12bf688de5056d34385859d19ff8ca4a76599a749e WatchSource:0}: Error finding container 30d5a413593f8d1b252aaa12bf688de5056d34385859d19ff8ca4a76599a749e: Status 404 returned error can't find the container with id 30d5a413593f8d1b252aaa12bf688de5056d34385859d19ff8ca4a76599a749e Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.629032 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0" Nov 24 12:12:14 crc kubenswrapper[4782]: 
E1124 12:12:14.629466 4782 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:12:14 crc kubenswrapper[4782]: E1124 12:12:14.629480 4782 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:12:14 crc kubenswrapper[4782]: E1124 12:12:14.629522 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift podName:81dbdeba-8b69-4638-b076-29f9edaeffa6 nodeName:}" failed. No retries permitted until 2025-11-24 12:12:16.62950865 +0000 UTC m=+985.873342419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift") pod "swift-storage-0" (UID: "81dbdeba-8b69-4638-b076-29f9edaeffa6") : configmap "swift-ring-files" not found Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.771299 4782 generic.go:334] "Generic (PLEG): container finished" podID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerID="f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd" exitCode=0 Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.771538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" event={"ID":"4c0393b0-4e2a-449c-ad19-aaddc8017944","Type":"ContainerDied","Data":"f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd"} Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.780241 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" event={"ID":"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd","Type":"ContainerStarted","Data":"a6055be165c9235d948bcc160349d566b692b228dcf1932854b96c1f7eff2baa"} Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.780446 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.783714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dnt8l" event={"ID":"8b31b3d1-1239-45a8-9380-693d4ce10324","Type":"ContainerStarted","Data":"30d5a413593f8d1b252aaa12bf688de5056d34385859d19ff8ca4a76599a749e"} Nov 24 12:12:14 crc kubenswrapper[4782]: I1124 12:12:14.809575 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" podStartSLOduration=3.80955918 podStartE2EDuration="3.80955918s" podCreationTimestamp="2025-11-24 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:14.807233338 +0000 UTC m=+984.051067107" watchObservedRunningTime="2025-11-24 12:12:14.80955918 +0000 UTC m=+984.053392949" Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.507182 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253c23a8-1c16-4771-b8ae-431577631442" path="/var/lib/kubelet/pods/253c23a8-1c16-4771-b8ae-431577631442/volumes" Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.507859 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cceb35-3bf5-4e4b-b338-296eb56cf073" path="/var/lib/kubelet/pods/32cceb35-3bf5-4e4b-b338-296eb56cf073/volumes" Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.797414 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0","Type":"ContainerStarted","Data":"a27993560428278bbd6bc0f02d9224bfd64ed97282db0728dcb95b0c4800b931"} Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.797780 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.797795 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0","Type":"ContainerStarted","Data":"b0a920760045311a706bec8541a64f6017681d17b1ad07adcef30e8b525e9388"} Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.799451 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" event={"ID":"4c0393b0-4e2a-449c-ad19-aaddc8017944","Type":"ContainerStarted","Data":"ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8"} Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.817162 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.265597688 podStartE2EDuration="3.817144544s" podCreationTimestamp="2025-11-24 12:12:12 +0000 UTC" firstStartedPulling="2025-11-24 12:12:13.462878938 +0000 UTC m=+982.706712697" lastFinishedPulling="2025-11-24 12:12:15.014425784 +0000 UTC m=+984.258259553" observedRunningTime="2025-11-24 12:12:15.816847326 +0000 UTC m=+985.060681105" watchObservedRunningTime="2025-11-24 12:12:15.817144544 +0000 UTC m=+985.060978313" Nov 24 12:12:15 crc kubenswrapper[4782]: I1124 12:12:15.836464 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" podStartSLOduration=3.8364441400000002 podStartE2EDuration="3.83644414s" podCreationTimestamp="2025-11-24 12:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:15.829942966 +0000 UTC m=+985.073776755" watchObservedRunningTime="2025-11-24 12:12:15.83644414 +0000 UTC m=+985.080277919" Nov 24 12:12:16 crc kubenswrapper[4782]: I1124 12:12:16.662909 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0" Nov 24 12:12:16 crc kubenswrapper[4782]: E1124 12:12:16.663167 4782 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:12:16 crc kubenswrapper[4782]: E1124 12:12:16.663183 4782 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:12:16 crc kubenswrapper[4782]: E1124 12:12:16.663234 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift podName:81dbdeba-8b69-4638-b076-29f9edaeffa6 nodeName:}" failed. No retries permitted until 2025-11-24 12:12:20.663219163 +0000 UTC m=+989.907052932 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift") pod "swift-storage-0" (UID: "81dbdeba-8b69-4638-b076-29f9edaeffa6") : configmap "swift-ring-files" not found
Nov 24 12:12:16 crc kubenswrapper[4782]: I1124 12:12:16.806388 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:17 crc kubenswrapper[4782]: I1124 12:12:17.567470 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 24 12:12:17 crc kubenswrapper[4782]: I1124 12:12:17.567513 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 24 12:12:19 crc kubenswrapper[4782]: I1124 12:12:19.276235 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 24 12:12:19 crc kubenswrapper[4782]: I1124 12:12:19.276939 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 24 12:12:19 crc kubenswrapper[4782]: I1124 12:12:19.356150 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 24 12:12:19 crc kubenswrapper[4782]: I1124 12:12:19.815146 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 24 12:12:19 crc kubenswrapper[4782]: I1124 12:12:19.909895 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 24 12:12:19 crc kubenswrapper[4782]: I1124 12:12:19.952520 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 24 12:12:20 crc kubenswrapper[4782]: I1124 12:12:20.740456 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:20 crc kubenswrapper[4782]: E1124 12:12:20.740677 4782 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 24 12:12:20 crc kubenswrapper[4782]: E1124 12:12:20.740882 4782 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
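The failure loop recorded above is worth unpacking: "etc-swift" is a projected volume whose source is the ConfigMap openstack/swift-ring-files (per the projected.go errors), so every SetUp attempt must fail until something publishes that ConfigMap. Below is a minimal Go sketch of the volume shape the kubelet is trying to materialize, using the upstream k8s.io/api types; the single-ConfigMap source is an assumption read off the error messages, not the swift-operator's actual manifest.

```go
// A minimal sketch, assuming the "etc-swift" volume is a projected volume
// whose only source is the ConfigMap "swift-ring-files" -- the shape implied
// by the projected.go errors above, not the operator's actual pod spec.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1" // requires k8s.io/api in go.mod
)

func main() {
	etcSwift := corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						// Until this ConfigMap exists in "openstack",
						// MountVolume.SetUp fails and the pod cannot start.
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "swift-ring-files",
						},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", etcSwift)
}
```

Each failed SetUp re-queues the operation with a doubled durationBeforeRetry, which is visible in the nestedpendingoperations entries: 500ms at 12:12:13, then 1s, 2s, 4s, and 8s immediately below.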
Nov 24 12:12:20 crc kubenswrapper[4782]: E1124 12:12:20.740953 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift podName:81dbdeba-8b69-4638-b076-29f9edaeffa6 nodeName:}" failed. No retries permitted until 2025-11-24 12:12:28.740931716 +0000 UTC m=+997.984765505 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift") pod "swift-storage-0" (UID: "81dbdeba-8b69-4638-b076-29f9edaeffa6") : configmap "swift-ring-files" not found
Nov 24 12:12:20 crc kubenswrapper[4782]: I1124 12:12:20.849307 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dnt8l" event={"ID":"8b31b3d1-1239-45a8-9380-693d4ce10324","Type":"ContainerStarted","Data":"82ca7222eae0620bf59a1682550f9b115a5613ca49a56b0512b8b481e30f9f44"}
Nov 24 12:12:20 crc kubenswrapper[4782]: I1124 12:12:20.868523 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dnt8l" podStartSLOduration=2.329647873 podStartE2EDuration="7.868504541s" podCreationTimestamp="2025-11-24 12:12:13 +0000 UTC" firstStartedPulling="2025-11-24 12:12:14.307102649 +0000 UTC m=+983.550936418" lastFinishedPulling="2025-11-24 12:12:19.845959317 +0000 UTC m=+989.089793086" observedRunningTime="2025-11-24 12:12:20.864690499 +0000 UTC m=+990.108524268" watchObservedRunningTime="2025-11-24 12:12:20.868504541 +0000 UTC m=+990.112338310"
Nov 24 12:12:22 crc kubenswrapper[4782]: I1124 12:12:22.091641 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b"
Nov 24 12:12:22 crc kubenswrapper[4782]: I1124 12:12:22.810523 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm"
Nov 24 12:12:22 crc kubenswrapper[4782]: I1124 12:12:22.865081 4782 generic.go:334] "Generic (PLEG): container finished" podID="819def2d-6f25-42ca-91f6-6951b7b97549" containerID="152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f" exitCode=0
Nov 24 12:12:22 crc kubenswrapper[4782]: I1124 12:12:22.865120 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"819def2d-6f25-42ca-91f6-6951b7b97549","Type":"ContainerDied","Data":"152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f"}
Nov 24 12:12:22 crc kubenswrapper[4782]: I1124 12:12:22.921176 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-pdc2b"]
Nov 24 12:12:22 crc kubenswrapper[4782]: I1124 12:12:22.922143 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" podUID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerName="dnsmasq-dns" containerID="cri-o://a6055be165c9235d948bcc160349d566b692b228dcf1932854b96c1f7eff2baa" gracePeriod=10
Nov 24 12:12:23 crc kubenswrapper[4782]: I1124 12:12:23.874319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"819def2d-6f25-42ca-91f6-6951b7b97549","Type":"ContainerStarted","Data":"2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9"}
Nov 24 12:12:23 crc kubenswrapper[4782]: I1124 12:12:23.876514 4782 generic.go:334] "Generic (PLEG): container finished" podID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerID="a6055be165c9235d948bcc160349d566b692b228dcf1932854b96c1f7eff2baa" exitCode=0
Nov 24 12:12:23 crc kubenswrapper[4782]: I1124 12:12:23.876550 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" event={"ID":"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd","Type":"ContainerDied","Data":"a6055be165c9235d948bcc160349d566b692b228dcf1932854b96c1f7eff2baa"}
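The pod_startup_latency_tracker entry for swift-ring-rebalance-dnt8l above also shows how podStartSLOduration relates to podStartE2EDuration: the SLO figure excludes the image-pull window. Using the monotonic m=+ offsets from that entry, the arithmetic reproduces the logged value exactly; a small sketch with the values copied from the log:

```go
// Reproduces podStartSLOduration for swift-ring-rebalance-dnt8l from the
// monotonic m=+ offsets in the entry above: the SLO duration is the e2e
// startup duration minus the image-pull window.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 983.550936418 // m=+ offset, seconds
		lastFinishedPulling = 989.089793086
		e2eDuration         = 7.868504541 // podStartE2EDuration
	)
	pullWindow := lastFinishedPulling - firstStartedPulling // 5.538856668s
	fmt.Printf("%.9fs\n", e2eDuration-pullWindow)           // 2.329647873s
}
```

7.868504541s end to end, minus the 5.538856668s spent pulling the image, leaves the reported 2.329647873s.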
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.491041 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fa32-account-create-xcqvn"]
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.495220 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fa32-account-create-xcqvn"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.508570 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.582147 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa32-account-create-xcqvn"]
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.613105 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ndnmn"]
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.614969 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ndnmn"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.617121 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzgv\" (UniqueName: \"kubernetes.io/projected/bcd25993-6371-49a5-bf60-29da33949583-kube-api-access-8fzgv\") pod \"glance-fa32-account-create-xcqvn\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") " pod="openstack/glance-fa32-account-create-xcqvn"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.617240 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd25993-6371-49a5-bf60-29da33949583-operator-scripts\") pod \"glance-fa32-account-create-xcqvn\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") " pod="openstack/glance-fa32-account-create-xcqvn"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.637343 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ndnmn"]
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.720804 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cp4c\" (UniqueName: \"kubernetes.io/projected/82af1928-0c70-41fd-8402-52f61b5a5ccf-kube-api-access-7cp4c\") pod \"glance-db-create-ndnmn\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " pod="openstack/glance-db-create-ndnmn"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.720860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fzgv\" (UniqueName: \"kubernetes.io/projected/bcd25993-6371-49a5-bf60-29da33949583-kube-api-access-8fzgv\") pod \"glance-fa32-account-create-xcqvn\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") " pod="openstack/glance-fa32-account-create-xcqvn"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.720881 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82af1928-0c70-41fd-8402-52f61b5a5ccf-operator-scripts\") pod \"glance-db-create-ndnmn\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " pod="openstack/glance-db-create-ndnmn"
Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.720939 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd25993-6371-49a5-bf60-29da33949583-operator-scripts\") pod \"glance-fa32-account-create-xcqvn\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") "
pod="openstack/glance-fa32-account-create-xcqvn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.722256 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd25993-6371-49a5-bf60-29da33949583-operator-scripts\") pod \"glance-fa32-account-create-xcqvn\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") " pod="openstack/glance-fa32-account-create-xcqvn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.771123 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fzgv\" (UniqueName: \"kubernetes.io/projected/bcd25993-6371-49a5-bf60-29da33949583-kube-api-access-8fzgv\") pod \"glance-fa32-account-create-xcqvn\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") " pod="openstack/glance-fa32-account-create-xcqvn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.822365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cp4c\" (UniqueName: \"kubernetes.io/projected/82af1928-0c70-41fd-8402-52f61b5a5ccf-kube-api-access-7cp4c\") pod \"glance-db-create-ndnmn\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " pod="openstack/glance-db-create-ndnmn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.822440 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82af1928-0c70-41fd-8402-52f61b5a5ccf-operator-scripts\") pod \"glance-db-create-ndnmn\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " pod="openstack/glance-db-create-ndnmn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.823115 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82af1928-0c70-41fd-8402-52f61b5a5ccf-operator-scripts\") pod \"glance-db-create-ndnmn\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " pod="openstack/glance-db-create-ndnmn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.839745 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa32-account-create-xcqvn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.856731 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cp4c\" (UniqueName: \"kubernetes.io/projected/82af1928-0c70-41fd-8402-52f61b5a5ccf-kube-api-access-7cp4c\") pod \"glance-db-create-ndnmn\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " pod="openstack/glance-db-create-ndnmn" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.895243 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" event={"ID":"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd","Type":"ContainerDied","Data":"4dec92100876ef2f6003af855a98a6bd9917b55374388e99111c2f393eb1bdb1"} Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.895524 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dec92100876ef2f6003af855a98a6bd9917b55374388e99111c2f393eb1bdb1" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.895547 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.926679 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.908661176 podStartE2EDuration="1m0.92666396s" podCreationTimestamp="2025-11-24 12:11:24 +0000 UTC" firstStartedPulling="2025-11-24 12:11:26.718099847 +0000 UTC m=+935.961933616" lastFinishedPulling="2025-11-24 12:11:48.736102631 +0000 UTC m=+957.979936400" observedRunningTime="2025-11-24 12:12:24.922434386 +0000 UTC m=+994.166268155" watchObservedRunningTime="2025-11-24 12:12:24.92666396 +0000 UTC m=+994.170497729" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.930417 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" Nov 24 12:12:24 crc kubenswrapper[4782]: I1124 12:12:24.956907 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ndnmn" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.024880 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzv2c\" (UniqueName: \"kubernetes.io/projected/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-kube-api-access-jzv2c\") pod \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.024940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-config\") pod \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.025005 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-dns-svc\") pod \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\" (UID: \"2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd\") " Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.035222 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-kube-api-access-jzv2c" (OuterVolumeSpecName: "kube-api-access-jzv2c") pod "2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" (UID: "2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd"). 
InnerVolumeSpecName "kube-api-access-jzv2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.090306 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-config" (OuterVolumeSpecName: "config") pod "2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" (UID: "2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.126768 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzv2c\" (UniqueName: \"kubernetes.io/projected/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-kube-api-access-jzv2c\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.127277 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.163028 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" (UID: "2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.228298 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.293989 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fa32-account-create-xcqvn"] Nov 24 12:12:25 crc kubenswrapper[4782]: W1124 12:12:25.299434 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcd25993_6371_49a5_bf60_29da33949583.slice/crio-48f3204d1c70fb724f60013aafc8aee934b595d2756d0da6e15b064ee29fedfd WatchSource:0}: Error finding container 48f3204d1c70fb724f60013aafc8aee934b595d2756d0da6e15b064ee29fedfd: Status 404 returned error can't find the container with id 48f3204d1c70fb724f60013aafc8aee934b595d2756d0da6e15b064ee29fedfd Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.392217 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ndnmn"] Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.903956 4782 generic.go:334] "Generic (PLEG): container finished" podID="bcd25993-6371-49a5-bf60-29da33949583" containerID="468de68c7d1d4fe9ebb818ad8ae0af4e375a1d422e1b6151a86ec4369db26627" exitCode=0 Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.904025 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa32-account-create-xcqvn" event={"ID":"bcd25993-6371-49a5-bf60-29da33949583","Type":"ContainerDied","Data":"468de68c7d1d4fe9ebb818ad8ae0af4e375a1d422e1b6151a86ec4369db26627"} Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.904051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa32-account-create-xcqvn" event={"ID":"bcd25993-6371-49a5-bf60-29da33949583","Type":"ContainerStarted","Data":"48f3204d1c70fb724f60013aafc8aee934b595d2756d0da6e15b064ee29fedfd"} Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.905756 4782 
generic.go:334] "Generic (PLEG): container finished" podID="82af1928-0c70-41fd-8402-52f61b5a5ccf" containerID="4a8587e1a291a5cb1726d10d4828a532dcc3ba2c205cb38d7e02606d91a90288" exitCode=0 Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.905809 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-pdc2b" Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.906445 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ndnmn" event={"ID":"82af1928-0c70-41fd-8402-52f61b5a5ccf","Type":"ContainerDied","Data":"4a8587e1a291a5cb1726d10d4828a532dcc3ba2c205cb38d7e02606d91a90288"} Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.906465 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ndnmn" event={"ID":"82af1928-0c70-41fd-8402-52f61b5a5ccf","Type":"ContainerStarted","Data":"ad71ae9c384f87f9207d4adfe93069d12238755165c355a1dddfc83662b39dc2"} Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.954008 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-pdc2b"] Nov 24 12:12:25 crc kubenswrapper[4782]: I1124 12:12:25.961714 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-pdc2b"] Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.280940 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ndnmn" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.373350 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cp4c\" (UniqueName: \"kubernetes.io/projected/82af1928-0c70-41fd-8402-52f61b5a5ccf-kube-api-access-7cp4c\") pod \"82af1928-0c70-41fd-8402-52f61b5a5ccf\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.373670 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82af1928-0c70-41fd-8402-52f61b5a5ccf-operator-scripts\") pod \"82af1928-0c70-41fd-8402-52f61b5a5ccf\" (UID: \"82af1928-0c70-41fd-8402-52f61b5a5ccf\") " Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.374973 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82af1928-0c70-41fd-8402-52f61b5a5ccf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82af1928-0c70-41fd-8402-52f61b5a5ccf" (UID: "82af1928-0c70-41fd-8402-52f61b5a5ccf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.380164 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82af1928-0c70-41fd-8402-52f61b5a5ccf-kube-api-access-7cp4c" (OuterVolumeSpecName: "kube-api-access-7cp4c") pod "82af1928-0c70-41fd-8402-52f61b5a5ccf" (UID: "82af1928-0c70-41fd-8402-52f61b5a5ccf"). InnerVolumeSpecName "kube-api-access-7cp4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.427658 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa32-account-create-xcqvn" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.475566 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd25993-6371-49a5-bf60-29da33949583-operator-scripts\") pod \"bcd25993-6371-49a5-bf60-29da33949583\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") " Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.475831 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fzgv\" (UniqueName: \"kubernetes.io/projected/bcd25993-6371-49a5-bf60-29da33949583-kube-api-access-8fzgv\") pod \"bcd25993-6371-49a5-bf60-29da33949583\" (UID: \"bcd25993-6371-49a5-bf60-29da33949583\") " Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.476055 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd25993-6371-49a5-bf60-29da33949583-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcd25993-6371-49a5-bf60-29da33949583" (UID: "bcd25993-6371-49a5-bf60-29da33949583"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.476241 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd25993-6371-49a5-bf60-29da33949583-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.476265 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82af1928-0c70-41fd-8402-52f61b5a5ccf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.476278 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cp4c\" (UniqueName: \"kubernetes.io/projected/82af1928-0c70-41fd-8402-52f61b5a5ccf-kube-api-access-7cp4c\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.484056 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd25993-6371-49a5-bf60-29da33949583-kube-api-access-8fzgv" (OuterVolumeSpecName: "kube-api-access-8fzgv") pod "bcd25993-6371-49a5-bf60-29da33949583" (UID: "bcd25993-6371-49a5-bf60-29da33949583"). InnerVolumeSpecName "kube-api-access-8fzgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.500059 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" path="/var/lib/kubelet/pods/2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd/volumes" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.577759 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fzgv\" (UniqueName: \"kubernetes.io/projected/bcd25993-6371-49a5-bf60-29da33949583-kube-api-access-8fzgv\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.835742 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.922255 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fa32-account-create-xcqvn" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.922915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fa32-account-create-xcqvn" event={"ID":"bcd25993-6371-49a5-bf60-29da33949583","Type":"ContainerDied","Data":"48f3204d1c70fb724f60013aafc8aee934b595d2756d0da6e15b064ee29fedfd"} Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.922938 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f3204d1c70fb724f60013aafc8aee934b595d2756d0da6e15b064ee29fedfd" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.924683 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ndnmn" event={"ID":"82af1928-0c70-41fd-8402-52f61b5a5ccf","Type":"ContainerDied","Data":"ad71ae9c384f87f9207d4adfe93069d12238755165c355a1dddfc83662b39dc2"} Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.924708 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad71ae9c384f87f9207d4adfe93069d12238755165c355a1dddfc83662b39dc2" Nov 24 12:12:27 crc kubenswrapper[4782]: I1124 12:12:27.924747 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ndnmn" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.728632 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5mdhg"] Nov 24 12:12:28 crc kubenswrapper[4782]: E1124 12:12:28.728905 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerName="init" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.728916 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerName="init" Nov 24 12:12:28 crc kubenswrapper[4782]: E1124 12:12:28.728928 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerName="dnsmasq-dns" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.728934 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerName="dnsmasq-dns" Nov 24 12:12:28 crc kubenswrapper[4782]: E1124 12:12:28.728950 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af1928-0c70-41fd-8402-52f61b5a5ccf" containerName="mariadb-database-create" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.728956 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af1928-0c70-41fd-8402-52f61b5a5ccf" containerName="mariadb-database-create" Nov 24 12:12:28 crc kubenswrapper[4782]: E1124 12:12:28.728977 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd25993-6371-49a5-bf60-29da33949583" containerName="mariadb-account-create" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.728983 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd25993-6371-49a5-bf60-29da33949583" containerName="mariadb-account-create" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.729134 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd25993-6371-49a5-bf60-29da33949583" containerName="mariadb-account-create" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.729157 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b72f99b-cfd8-4bf5-9dee-31b02a4fe5cd" containerName="dnsmasq-dns" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.729164 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="82af1928-0c70-41fd-8402-52f61b5a5ccf" containerName="mariadb-database-create"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.729773 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5mdhg"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.736780 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5mdhg"]
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.796339 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72848b25-6c96-4159-898d-4b9e7ee158bc-operator-scripts\") pod \"keystone-db-create-5mdhg\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " pod="openstack/keystone-db-create-5mdhg"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.796424 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4zwn\" (UniqueName: \"kubernetes.io/projected/72848b25-6c96-4159-898d-4b9e7ee158bc-kube-api-access-v4zwn\") pod \"keystone-db-create-5mdhg\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " pod="openstack/keystone-db-create-5mdhg"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.796504 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.802315 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81dbdeba-8b69-4638-b076-29f9edaeffa6-etc-swift\") pod \"swift-storage-0\" (UID: \"81dbdeba-8b69-4638-b076-29f9edaeffa6\") " pod="openstack/swift-storage-0"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.843410 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ec86-account-create-2vspw"]
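This is where the etc-swift retry loop resolves: the backoff window set at 12:12:20 expired at 12:12:28.740931716, and the very next attempt (12:12:28.802315 above) succeeded, implying the swift-ring-files ConfigMap appeared in the meantime, presumably published by the swift-ring-rebalance-dnt8l job whose container exits with code 0 just below. A hedged client-go sketch of how an external watcher might confirm the ConfigMap is present; the names come from the log, but the check itself is illustrative, not taken from the swift-operator:

```go
// Hedged illustration: verify that the ConfigMap the projected volume needs
// has been published. Namespace and name are from the log; the polling logic
// is an assumption, not operator code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cm, err := client.CoreV1().ConfigMaps("openstack").
		Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	if err != nil {
		// While this errors, kubelet keeps retrying the mount with backoff.
		fmt.Println("rings not published yet:", err)
		return
	}
	fmt.Println("rings present, keys:", len(cm.Data)) // mount can now succeed
}
```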
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.844481 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ec86-account-create-2vspw"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.846421 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.871945 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ec86-account-create-2vspw"]
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.897688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9d98\" (UniqueName: \"kubernetes.io/projected/763d5f9f-7507-45f0-a872-222bc321e3d4-kube-api-access-l9d98\") pod \"keystone-ec86-account-create-2vspw\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " pod="openstack/keystone-ec86-account-create-2vspw"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.898044 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72848b25-6c96-4159-898d-4b9e7ee158bc-operator-scripts\") pod \"keystone-db-create-5mdhg\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " pod="openstack/keystone-db-create-5mdhg"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.898134 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763d5f9f-7507-45f0-a872-222bc321e3d4-operator-scripts\") pod \"keystone-ec86-account-create-2vspw\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " pod="openstack/keystone-ec86-account-create-2vspw"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.898185 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4zwn\" (UniqueName: \"kubernetes.io/projected/72848b25-6c96-4159-898d-4b9e7ee158bc-kube-api-access-v4zwn\") pod \"keystone-db-create-5mdhg\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " pod="openstack/keystone-db-create-5mdhg"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.898830 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72848b25-6c96-4159-898d-4b9e7ee158bc-operator-scripts\") pod \"keystone-db-create-5mdhg\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " pod="openstack/keystone-db-create-5mdhg"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.918325 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4zwn\" (UniqueName: \"kubernetes.io/projected/72848b25-6c96-4159-898d-4b9e7ee158bc-kube-api-access-v4zwn\") pod \"keystone-db-create-5mdhg\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " pod="openstack/keystone-db-create-5mdhg"
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.932931 4782 generic.go:334] "Generic (PLEG): container finished" podID="8b31b3d1-1239-45a8-9380-693d4ce10324" containerID="82ca7222eae0620bf59a1682550f9b115a5613ca49a56b0512b8b481e30f9f44" exitCode=0
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.932997 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dnt8l" event={"ID":"8b31b3d1-1239-45a8-9380-693d4ce10324","Type":"ContainerDied","Data":"82ca7222eae0620bf59a1682550f9b115a5613ca49a56b0512b8b481e30f9f44"}
Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.962908 4782 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-storage-0" Nov 24 12:12:28 crc kubenswrapper[4782]: I1124 12:12:28.999904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763d5f9f-7507-45f0-a872-222bc321e3d4-operator-scripts\") pod \"keystone-ec86-account-create-2vspw\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " pod="openstack/keystone-ec86-account-create-2vspw" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:28.999985 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9d98\" (UniqueName: \"kubernetes.io/projected/763d5f9f-7507-45f0-a872-222bc321e3d4-kube-api-access-l9d98\") pod \"keystone-ec86-account-create-2vspw\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " pod="openstack/keystone-ec86-account-create-2vspw" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.000927 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763d5f9f-7507-45f0-a872-222bc321e3d4-operator-scripts\") pod \"keystone-ec86-account-create-2vspw\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " pod="openstack/keystone-ec86-account-create-2vspw" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.029922 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9d98\" (UniqueName: \"kubernetes.io/projected/763d5f9f-7507-45f0-a872-222bc321e3d4-kube-api-access-l9d98\") pod \"keystone-ec86-account-create-2vspw\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " pod="openstack/keystone-ec86-account-create-2vspw" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.044843 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5mdhg" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.064776 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rz9l9"] Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.066807 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.080621 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rz9l9"] Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.101549 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84xr\" (UniqueName: \"kubernetes.io/projected/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-kube-api-access-h84xr\") pod \"placement-db-create-rz9l9\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.101594 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-operator-scripts\") pod \"placement-db-create-rz9l9\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.159807 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ec86-account-create-2vspw" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.203056 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84xr\" (UniqueName: \"kubernetes.io/projected/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-kube-api-access-h84xr\") pod \"placement-db-create-rz9l9\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.203120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-operator-scripts\") pod \"placement-db-create-rz9l9\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.203868 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-operator-scripts\") pod \"placement-db-create-rz9l9\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.210706 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-49d6-account-create-465ld"] Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.211995 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.214787 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.220744 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-49d6-account-create-465ld"] Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.241883 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84xr\" (UniqueName: \"kubernetes.io/projected/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-kube-api-access-h84xr\") pod \"placement-db-create-rz9l9\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.307824 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e80316-d322-433c-a47c-6f7cfcf1c267-operator-scripts\") pod \"placement-49d6-account-create-465ld\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.308223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9bv7\" (UniqueName: \"kubernetes.io/projected/e0e80316-d322-433c-a47c-6f7cfcf1c267-kube-api-access-h9bv7\") pod \"placement-49d6-account-create-465ld\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.408290 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.409571 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e80316-d322-433c-a47c-6f7cfcf1c267-operator-scripts\") pod \"placement-49d6-account-create-465ld\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.409610 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9bv7\" (UniqueName: \"kubernetes.io/projected/e0e80316-d322-433c-a47c-6f7cfcf1c267-kube-api-access-h9bv7\") pod \"placement-49d6-account-create-465ld\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.410516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e80316-d322-433c-a47c-6f7cfcf1c267-operator-scripts\") pod \"placement-49d6-account-create-465ld\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.436136 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9bv7\" (UniqueName: \"kubernetes.io/projected/e0e80316-d322-433c-a47c-6f7cfcf1c267-kube-api-access-h9bv7\") pod \"placement-49d6-account-create-465ld\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.440435 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5mdhg"] Nov 24 12:12:29 crc kubenswrapper[4782]: W1124 12:12:29.480270 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72848b25_6c96_4159_898d_4b9e7ee158bc.slice/crio-a4ae5b6d57b087aaece204c9d9428da485cdceb4d47a1b3a38a9614d5e870e41 WatchSource:0}: Error finding container a4ae5b6d57b087aaece204c9d9428da485cdceb4d47a1b3a38a9614d5e870e41: Status 404 returned error can't find the container with id a4ae5b6d57b087aaece204c9d9428da485cdceb4d47a1b3a38a9614d5e870e41 Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.540906 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.672447 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 12:12:29 crc kubenswrapper[4782]: W1124 12:12:29.683741 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81dbdeba_8b69_4638_b076_29f9edaeffa6.slice/crio-98a7ccd754a6fc776475c8fa07d3d45916b61fe40188d8d9bafaf9514ad9be2a WatchSource:0}: Error finding container 98a7ccd754a6fc776475c8fa07d3d45916b61fe40188d8d9bafaf9514ad9be2a: Status 404 returned error can't find the container with id 98a7ccd754a6fc776475c8fa07d3d45916b61fe40188d8d9bafaf9514ad9be2a Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.798170 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ec86-account-create-2vspw"] Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.815721 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nftg9"] Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.816941 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.828856 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jhhvs" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.828974 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.836129 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nftg9"] Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.862540 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-m6c9b" podUID="a62553ed-d73b-49c8-be06-e9ad0542d8da" containerName="ovn-controller" probeResult="failure" output=< Nov 24 12:12:29 crc kubenswrapper[4782]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 12:12:29 crc kubenswrapper[4782]: > Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.876695 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.926508 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-db-sync-config-data\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.926770 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvf48\" (UniqueName: \"kubernetes.io/projected/cce98ec2-7dab-420c-8f56-e80c874419eb-kube-api-access-gvf48\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.926890 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-combined-ca-bundle\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " 
pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.926922 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-config-data\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.961903 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"98a7ccd754a6fc776475c8fa07d3d45916b61fe40188d8d9bafaf9514ad9be2a"} Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.975721 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5mdhg" event={"ID":"72848b25-6c96-4159-898d-4b9e7ee158bc","Type":"ContainerStarted","Data":"c98517cae9b33aaf737abcdc16cd9401952fd08cd8d095e04fd87806d3695b62"} Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.975788 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5mdhg" event={"ID":"72848b25-6c96-4159-898d-4b9e7ee158bc","Type":"ContainerStarted","Data":"a4ae5b6d57b087aaece204c9d9428da485cdceb4d47a1b3a38a9614d5e870e41"} Nov 24 12:12:29 crc kubenswrapper[4782]: I1124 12:12:29.981762 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ec86-account-create-2vspw" event={"ID":"763d5f9f-7507-45f0-a872-222bc321e3d4","Type":"ContainerStarted","Data":"9d0eeb5e2a73f892b34da495c8e0477b0a32bcdd1bd446938f843b93dce7cc92"} Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.028448 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-combined-ca-bundle\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.028502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-config-data\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.028552 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-db-sync-config-data\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.028574 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvf48\" (UniqueName: \"kubernetes.io/projected/cce98ec2-7dab-420c-8f56-e80c874419eb-kube-api-access-gvf48\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.050199 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-db-sync-config-data\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 
12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.050396 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-combined-ca-bundle\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.051944 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-config-data\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.094511 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvf48\" (UniqueName: \"kubernetes.io/projected/cce98ec2-7dab-420c-8f56-e80c874419eb-kube-api-access-gvf48\") pod \"glance-db-sync-nftg9\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.100434 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-5mdhg" podStartSLOduration=2.1004085630000002 podStartE2EDuration="2.100408563s" podCreationTimestamp="2025-11-24 12:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:29.99497904 +0000 UTC m=+999.238812809" watchObservedRunningTime="2025-11-24 12:12:30.100408563 +0000 UTC m=+999.344242342" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.106394 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rz9l9"] Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.254798 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nftg9" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.326275 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-49d6-account-create-465ld"] Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.402392 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.539081 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-scripts\") pod \"8b31b3d1-1239-45a8-9380-693d4ce10324\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.539150 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-ring-data-devices\") pod \"8b31b3d1-1239-45a8-9380-693d4ce10324\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.539235 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdtv\" (UniqueName: \"kubernetes.io/projected/8b31b3d1-1239-45a8-9380-693d4ce10324-kube-api-access-htdtv\") pod \"8b31b3d1-1239-45a8-9380-693d4ce10324\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.539403 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-dispersionconf\") pod \"8b31b3d1-1239-45a8-9380-693d4ce10324\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.539439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-combined-ca-bundle\") pod \"8b31b3d1-1239-45a8-9380-693d4ce10324\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.539461 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b31b3d1-1239-45a8-9380-693d4ce10324-etc-swift\") pod \"8b31b3d1-1239-45a8-9380-693d4ce10324\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.539487 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-swiftconf\") pod \"8b31b3d1-1239-45a8-9380-693d4ce10324\" (UID: \"8b31b3d1-1239-45a8-9380-693d4ce10324\") " Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.555816 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8b31b3d1-1239-45a8-9380-693d4ce10324" (UID: "8b31b3d1-1239-45a8-9380-693d4ce10324"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.555978 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b31b3d1-1239-45a8-9380-693d4ce10324-kube-api-access-htdtv" (OuterVolumeSpecName: "kube-api-access-htdtv") pod "8b31b3d1-1239-45a8-9380-693d4ce10324" (UID: "8b31b3d1-1239-45a8-9380-693d4ce10324"). InnerVolumeSpecName "kube-api-access-htdtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.557626 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b31b3d1-1239-45a8-9380-693d4ce10324-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8b31b3d1-1239-45a8-9380-693d4ce10324" (UID: "8b31b3d1-1239-45a8-9380-693d4ce10324"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.575811 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8b31b3d1-1239-45a8-9380-693d4ce10324" (UID: "8b31b3d1-1239-45a8-9380-693d4ce10324"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.612052 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-scripts" (OuterVolumeSpecName: "scripts") pod "8b31b3d1-1239-45a8-9380-693d4ce10324" (UID: "8b31b3d1-1239-45a8-9380-693d4ce10324"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.628664 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b31b3d1-1239-45a8-9380-693d4ce10324" (UID: "8b31b3d1-1239-45a8-9380-693d4ce10324"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.642111 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.642136 4782 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b31b3d1-1239-45a8-9380-693d4ce10324-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.642147 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdtv\" (UniqueName: \"kubernetes.io/projected/8b31b3d1-1239-45a8-9380-693d4ce10324-kube-api-access-htdtv\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.642157 4782 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.642165 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.642173 4782 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b31b3d1-1239-45a8-9380-693d4ce10324-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.660063 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8b31b3d1-1239-45a8-9380-693d4ce10324" (UID: "8b31b3d1-1239-45a8-9380-693d4ce10324"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.756010 4782 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b31b3d1-1239-45a8-9380-693d4ce10324-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.993699 4782 generic.go:334] "Generic (PLEG): container finished" podID="72848b25-6c96-4159-898d-4b9e7ee158bc" containerID="c98517cae9b33aaf737abcdc16cd9401952fd08cd8d095e04fd87806d3695b62" exitCode=0 Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.994019 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5mdhg" event={"ID":"72848b25-6c96-4159-898d-4b9e7ee158bc","Type":"ContainerDied","Data":"c98517cae9b33aaf737abcdc16cd9401952fd08cd8d095e04fd87806d3695b62"} Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.997282 4782 generic.go:334] "Generic (PLEG): container finished" podID="763d5f9f-7507-45f0-a872-222bc321e3d4" containerID="b2670481543466180dc7fe65d302766fdb1b1ac1c1064dd47f972a4af682b4d7" exitCode=0 Nov 24 12:12:30 crc kubenswrapper[4782]: I1124 12:12:30.997345 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ec86-account-create-2vspw" event={"ID":"763d5f9f-7507-45f0-a872-222bc321e3d4","Type":"ContainerDied","Data":"b2670481543466180dc7fe65d302766fdb1b1ac1c1064dd47f972a4af682b4d7"} Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.007077 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rz9l9" event={"ID":"0f48cdf3-4a58-4730-a0f1-914a253e7ab1","Type":"ContainerStarted","Data":"3fa5a71e889cae74b41b88c3a4fb820d3131f0439d065fd03337179b749af699"} Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.007115 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rz9l9" event={"ID":"0f48cdf3-4a58-4730-a0f1-914a253e7ab1","Type":"ContainerStarted","Data":"38e94da5a1bfd6357a653ca61626d8a83287536d1d16400a211bcca9de66e669"} Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.009022 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dnt8l" event={"ID":"8b31b3d1-1239-45a8-9380-693d4ce10324","Type":"ContainerDied","Data":"30d5a413593f8d1b252aaa12bf688de5056d34385859d19ff8ca4a76599a749e"} Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.009174 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30d5a413593f8d1b252aaa12bf688de5056d34385859d19ff8ca4a76599a749e" Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.009047 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dnt8l" Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.014652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-49d6-account-create-465ld" event={"ID":"e0e80316-d322-433c-a47c-6f7cfcf1c267","Type":"ContainerStarted","Data":"0f62dc1b7b22e4063937876a88435850ffafa9db3321e67ad2424262f153db1a"} Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.014696 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-49d6-account-create-465ld" event={"ID":"e0e80316-d322-433c-a47c-6f7cfcf1c267","Type":"ContainerStarted","Data":"3b190bace27882cf9ff5c50ea966456399d290e8b531143ef0100de960663c95"} Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.067940 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-rz9l9" podStartSLOduration=2.067917933 podStartE2EDuration="2.067917933s" podCreationTimestamp="2025-11-24 12:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:31.059752494 +0000 UTC m=+1000.303586263" watchObservedRunningTime="2025-11-24 12:12:31.067917933 +0000 UTC m=+1000.311751702" Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.094361 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-49d6-account-create-465ld" podStartSLOduration=2.0943388 podStartE2EDuration="2.0943388s" podCreationTimestamp="2025-11-24 12:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:31.077861129 +0000 UTC m=+1000.321694898" watchObservedRunningTime="2025-11-24 12:12:31.0943388 +0000 UTC m=+1000.338172579" Nov 24 12:12:31 crc kubenswrapper[4782]: I1124 12:12:31.172771 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nftg9"] Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.025825 4782 generic.go:334] "Generic (PLEG): container finished" podID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" containerID="2b7a8ecd6eae3c7f121e653b6d11687117e4f39b41b193458a93c02b4f52a8da" exitCode=0 Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.026187 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb","Type":"ContainerDied","Data":"2b7a8ecd6eae3c7f121e653b6d11687117e4f39b41b193458a93c02b4f52a8da"} Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.033478 4782 generic.go:334] "Generic (PLEG): container finished" podID="e0e80316-d322-433c-a47c-6f7cfcf1c267" containerID="0f62dc1b7b22e4063937876a88435850ffafa9db3321e67ad2424262f153db1a" exitCode=0 Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.033585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-49d6-account-create-465ld" event={"ID":"e0e80316-d322-433c-a47c-6f7cfcf1c267","Type":"ContainerDied","Data":"0f62dc1b7b22e4063937876a88435850ffafa9db3321e67ad2424262f153db1a"} Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.036546 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nftg9" event={"ID":"cce98ec2-7dab-420c-8f56-e80c874419eb","Type":"ContainerStarted","Data":"e42b450259b87d713302a0007a7e594a3d002475f66ae1f9cebe7ee53fc5c7d7"} Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.041899 4782 generic.go:334] "Generic (PLEG): 
container finished" podID="0f48cdf3-4a58-4730-a0f1-914a253e7ab1" containerID="3fa5a71e889cae74b41b88c3a4fb820d3131f0439d065fd03337179b749af699" exitCode=0 Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.042081 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rz9l9" event={"ID":"0f48cdf3-4a58-4730-a0f1-914a253e7ab1","Type":"ContainerDied","Data":"3fa5a71e889cae74b41b88c3a4fb820d3131f0439d065fd03337179b749af699"} Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.513598 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ec86-account-create-2vspw" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.590906 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5mdhg" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.593697 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763d5f9f-7507-45f0-a872-222bc321e3d4-operator-scripts\") pod \"763d5f9f-7507-45f0-a872-222bc321e3d4\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.593763 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9d98\" (UniqueName: \"kubernetes.io/projected/763d5f9f-7507-45f0-a872-222bc321e3d4-kube-api-access-l9d98\") pod \"763d5f9f-7507-45f0-a872-222bc321e3d4\" (UID: \"763d5f9f-7507-45f0-a872-222bc321e3d4\") " Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.594979 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/763d5f9f-7507-45f0-a872-222bc321e3d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "763d5f9f-7507-45f0-a872-222bc321e3d4" (UID: "763d5f9f-7507-45f0-a872-222bc321e3d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.601969 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763d5f9f-7507-45f0-a872-222bc321e3d4-kube-api-access-l9d98" (OuterVolumeSpecName: "kube-api-access-l9d98") pod "763d5f9f-7507-45f0-a872-222bc321e3d4" (UID: "763d5f9f-7507-45f0-a872-222bc321e3d4"). InnerVolumeSpecName "kube-api-access-l9d98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.695977 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72848b25-6c96-4159-898d-4b9e7ee158bc-operator-scripts\") pod \"72848b25-6c96-4159-898d-4b9e7ee158bc\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.696018 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4zwn\" (UniqueName: \"kubernetes.io/projected/72848b25-6c96-4159-898d-4b9e7ee158bc-kube-api-access-v4zwn\") pod \"72848b25-6c96-4159-898d-4b9e7ee158bc\" (UID: \"72848b25-6c96-4159-898d-4b9e7ee158bc\") " Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.696445 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763d5f9f-7507-45f0-a872-222bc321e3d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.696464 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9d98\" (UniqueName: \"kubernetes.io/projected/763d5f9f-7507-45f0-a872-222bc321e3d4-kube-api-access-l9d98\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.696494 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72848b25-6c96-4159-898d-4b9e7ee158bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72848b25-6c96-4159-898d-4b9e7ee158bc" (UID: "72848b25-6c96-4159-898d-4b9e7ee158bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.705628 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72848b25-6c96-4159-898d-4b9e7ee158bc-kube-api-access-v4zwn" (OuterVolumeSpecName: "kube-api-access-v4zwn") pod "72848b25-6c96-4159-898d-4b9e7ee158bc" (UID: "72848b25-6c96-4159-898d-4b9e7ee158bc"). InnerVolumeSpecName "kube-api-access-v4zwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.798262 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72848b25-6c96-4159-898d-4b9e7ee158bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:32 crc kubenswrapper[4782]: I1124 12:12:32.798658 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4zwn\" (UniqueName: \"kubernetes.io/projected/72848b25-6c96-4159-898d-4b9e7ee158bc-kube-api-access-v4zwn\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.055787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb","Type":"ContainerStarted","Data":"e41d3a35f61e13b3a4eb91c530f3ac601e2353c3e2f73b65fef7fbdf005abbd6"} Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.056005 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.060880 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"15abebc8b74d6cd9833a587cbdc85b81f21b52cf86e69e32e4daf33721b9f3ab"} Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.060924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"0595d7b004d1e784cfbacec9a52405f98021e754dde94a04b96c9a6890649c67"} Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.060936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"1ebad2450b6dbe52d8576512c5391f6970a773e64ff4217ad7b073b0c33025a3"} Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.063494 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5mdhg" event={"ID":"72848b25-6c96-4159-898d-4b9e7ee158bc","Type":"ContainerDied","Data":"a4ae5b6d57b087aaece204c9d9428da485cdceb4d47a1b3a38a9614d5e870e41"} Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.063531 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4ae5b6d57b087aaece204c9d9428da485cdceb4d47a1b3a38a9614d5e870e41" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.063587 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5mdhg" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.080579 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ec86-account-create-2vspw" event={"ID":"763d5f9f-7507-45f0-a872-222bc321e3d4","Type":"ContainerDied","Data":"9d0eeb5e2a73f892b34da495c8e0477b0a32bcdd1bd446938f843b93dce7cc92"} Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.080622 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d0eeb5e2a73f892b34da495c8e0477b0a32bcdd1bd446938f843b93dce7cc92" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.080676 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ec86-account-create-2vspw" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.105972 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371967.748823 podStartE2EDuration="1m9.105952393s" podCreationTimestamp="2025-11-24 12:11:24 +0000 UTC" firstStartedPulling="2025-11-24 12:11:26.4604449 +0000 UTC m=+935.704278669" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:33.09426943 +0000 UTC m=+1002.338103199" watchObservedRunningTime="2025-11-24 12:12:33.105952393 +0000 UTC m=+1002.349786162" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.531156 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.531721 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.618055 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9bv7\" (UniqueName: \"kubernetes.io/projected/e0e80316-d322-433c-a47c-6f7cfcf1c267-kube-api-access-h9bv7\") pod \"e0e80316-d322-433c-a47c-6f7cfcf1c267\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.619234 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e80316-d322-433c-a47c-6f7cfcf1c267-operator-scripts\") pod \"e0e80316-d322-433c-a47c-6f7cfcf1c267\" (UID: \"e0e80316-d322-433c-a47c-6f7cfcf1c267\") " Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.620263 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h84xr\" (UniqueName: \"kubernetes.io/projected/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-kube-api-access-h84xr\") pod \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.620407 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-operator-scripts\") pod \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\" (UID: \"0f48cdf3-4a58-4730-a0f1-914a253e7ab1\") " Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.620134 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0e80316-d322-433c-a47c-6f7cfcf1c267-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0e80316-d322-433c-a47c-6f7cfcf1c267" (UID: "e0e80316-d322-433c-a47c-6f7cfcf1c267"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.623315 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f48cdf3-4a58-4730-a0f1-914a253e7ab1" (UID: "0f48cdf3-4a58-4730-a0f1-914a253e7ab1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.627704 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e80316-d322-433c-a47c-6f7cfcf1c267-kube-api-access-h9bv7" (OuterVolumeSpecName: "kube-api-access-h9bv7") pod "e0e80316-d322-433c-a47c-6f7cfcf1c267" (UID: "e0e80316-d322-433c-a47c-6f7cfcf1c267"). InnerVolumeSpecName "kube-api-access-h9bv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.629837 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-kube-api-access-h84xr" (OuterVolumeSpecName: "kube-api-access-h84xr") pod "0f48cdf3-4a58-4730-a0f1-914a253e7ab1" (UID: "0f48cdf3-4a58-4730-a0f1-914a253e7ab1"). InnerVolumeSpecName "kube-api-access-h84xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.722152 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9bv7\" (UniqueName: \"kubernetes.io/projected/e0e80316-d322-433c-a47c-6f7cfcf1c267-kube-api-access-h9bv7\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.722195 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e80316-d322-433c-a47c-6f7cfcf1c267-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.722207 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h84xr\" (UniqueName: \"kubernetes.io/projected/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-kube-api-access-h84xr\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:33 crc kubenswrapper[4782]: I1124 12:12:33.722219 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f48cdf3-4a58-4730-a0f1-914a253e7ab1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.093046 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rz9l9" event={"ID":"0f48cdf3-4a58-4730-a0f1-914a253e7ab1","Type":"ContainerDied","Data":"38e94da5a1bfd6357a653ca61626d8a83287536d1d16400a211bcca9de66e669"} Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.093329 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e94da5a1bfd6357a653ca61626d8a83287536d1d16400a211bcca9de66e669" Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.093340 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rz9l9" Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.096156 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-49d6-account-create-465ld" event={"ID":"e0e80316-d322-433c-a47c-6f7cfcf1c267","Type":"ContainerDied","Data":"3b190bace27882cf9ff5c50ea966456399d290e8b531143ef0100de960663c95"} Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.096179 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b190bace27882cf9ff5c50ea966456399d290e8b531143ef0100de960663c95" Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.096231 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-49d6-account-create-465ld" Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.128402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"a8d133686eb5efdb5ddeb104940b6bf25ae53d2f7662346feeeaa5f6b26e7ddf"} Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.807060 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-m6c9b" podUID="a62553ed-d73b-49c8-be06-e9ad0542d8da" containerName="ovn-controller" probeResult="failure" output=< Nov 24 12:12:34 crc kubenswrapper[4782]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 12:12:34 crc kubenswrapper[4782]: > Nov 24 12:12:34 crc kubenswrapper[4782]: I1124 12:12:34.833910 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7bqn5" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.042850 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m6c9b-config-bblmp"] Nov 24 12:12:35 crc kubenswrapper[4782]: E1124 12:12:35.043151 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72848b25-6c96-4159-898d-4b9e7ee158bc" containerName="mariadb-database-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043190 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="72848b25-6c96-4159-898d-4b9e7ee158bc" containerName="mariadb-database-create" Nov 24 12:12:35 crc kubenswrapper[4782]: E1124 12:12:35.043198 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763d5f9f-7507-45f0-a872-222bc321e3d4" containerName="mariadb-account-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043204 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="763d5f9f-7507-45f0-a872-222bc321e3d4" containerName="mariadb-account-create" Nov 24 12:12:35 crc kubenswrapper[4782]: E1124 12:12:35.043217 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b31b3d1-1239-45a8-9380-693d4ce10324" containerName="swift-ring-rebalance" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043223 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b31b3d1-1239-45a8-9380-693d4ce10324" containerName="swift-ring-rebalance" Nov 24 12:12:35 crc kubenswrapper[4782]: E1124 12:12:35.043236 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e80316-d322-433c-a47c-6f7cfcf1c267" containerName="mariadb-account-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043242 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e80316-d322-433c-a47c-6f7cfcf1c267" containerName="mariadb-account-create" Nov 24 12:12:35 crc kubenswrapper[4782]: E1124 12:12:35.043260 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f48cdf3-4a58-4730-a0f1-914a253e7ab1" containerName="mariadb-database-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043268 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f48cdf3-4a58-4730-a0f1-914a253e7ab1" containerName="mariadb-database-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043434 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b31b3d1-1239-45a8-9380-693d4ce10324" containerName="swift-ring-rebalance" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043450 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="72848b25-6c96-4159-898d-4b9e7ee158bc" containerName="mariadb-database-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043457 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="763d5f9f-7507-45f0-a872-222bc321e3d4" containerName="mariadb-account-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043468 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e80316-d322-433c-a47c-6f7cfcf1c267" containerName="mariadb-account-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.043477 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f48cdf3-4a58-4730-a0f1-914a253e7ab1" containerName="mariadb-database-create" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.047539 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.055116 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.069878 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m6c9b-config-bblmp"] Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.151327 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.151382 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run-ovn\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.151412 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-scripts\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.151428 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-log-ovn\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.151465 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcth\" (UniqueName: \"kubernetes.io/projected/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-kube-api-access-kqcth\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.151500 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-additional-scripts\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253133 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-scripts\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253179 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-log-ovn\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253226 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqcth\" (UniqueName: \"kubernetes.io/projected/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-kube-api-access-kqcth\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253267 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-additional-scripts\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253337 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253353 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run-ovn\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run-ovn\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253576 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.253627 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-log-ovn\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.254164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-additional-scripts\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.255171 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-scripts\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.293523 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqcth\" (UniqueName: \"kubernetes.io/projected/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-kube-api-access-kqcth\") pod \"ovn-controller-m6c9b-config-bblmp\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.370502 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:35 crc kubenswrapper[4782]: I1124 12:12:35.946149 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m6c9b-config-bblmp"] Nov 24 12:12:35 crc kubenswrapper[4782]: W1124 12:12:35.960954 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d8b3e9f_aa85_412b_a2ec_0908f1ef3a96.slice/crio-4ec97c014d565c7f25b8348720296df06a4dfb4f8b1811e31633e257e9edb2c1 WatchSource:0}: Error finding container 4ec97c014d565c7f25b8348720296df06a4dfb4f8b1811e31633e257e9edb2c1: Status 404 returned error can't find the container with id 4ec97c014d565c7f25b8348720296df06a4dfb4f8b1811e31633e257e9edb2c1 Nov 24 12:12:36 crc kubenswrapper[4782]: I1124 12:12:36.108785 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 24 12:12:36 crc kubenswrapper[4782]: I1124 12:12:36.144161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b-config-bblmp" event={"ID":"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96","Type":"ContainerStarted","Data":"4ec97c014d565c7f25b8348720296df06a4dfb4f8b1811e31633e257e9edb2c1"} Nov 24 12:12:36 crc kubenswrapper[4782]: I1124 12:12:36.151685 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"7142f727d49324202bae9268f0b4e77c090fe74c4879d47e53e3aa69f6aa216c"} Nov 24 12:12:36 crc kubenswrapper[4782]: I1124 12:12:36.151745 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"ef32bb2aaf88a00340b9f8adf359879b8a24d88cf5d1fe4fb101b8b0a7c5432e"} Nov 24 
12:12:36 crc kubenswrapper[4782]: I1124 12:12:36.151758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"ffa4c9bf38b85029e9e5c440afb71af9eb74e89f221b65d51aa868207edcc960"} Nov 24 12:12:37 crc kubenswrapper[4782]: I1124 12:12:37.173617 4782 generic.go:334] "Generic (PLEG): container finished" podID="9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" containerID="9d4d407e5e78e5878d1581fa450294b3cfd8631b292d94ba5495ed5889634558" exitCode=0 Nov 24 12:12:37 crc kubenswrapper[4782]: I1124 12:12:37.173712 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b-config-bblmp" event={"ID":"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96","Type":"ContainerDied","Data":"9d4d407e5e78e5878d1581fa450294b3cfd8631b292d94ba5495ed5889634558"} Nov 24 12:12:37 crc kubenswrapper[4782]: I1124 12:12:37.191497 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"b872432371cbbde4e5253505ffdfb589349919f43ce4e18871da50edae842cd5"} Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.517810 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.619463 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run\") pod \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.619578 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqcth\" (UniqueName: \"kubernetes.io/projected/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-kube-api-access-kqcth\") pod \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.619687 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run-ovn\") pod \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.619707 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run" (OuterVolumeSpecName: "var-run") pod "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" (UID: "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.619819 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-scripts\") pod \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.620433 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" (UID: "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.620538 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-additional-scripts\") pod \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.620585 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-log-ovn\") pod \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\" (UID: \"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96\") " Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.621185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" (UID: "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.621213 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" (UID: "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.621534 4782 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.621546 4782 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.621556 4782 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.621567 4782 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.622938 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-scripts" (OuterVolumeSpecName: "scripts") pod "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" (UID: "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.644641 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-kube-api-access-kqcth" (OuterVolumeSpecName: "kube-api-access-kqcth") pod "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" (UID: "9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96"). 
InnerVolumeSpecName "kube-api-access-kqcth". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.722802 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqcth\" (UniqueName: \"kubernetes.io/projected/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-kube-api-access-kqcth\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:38 crc kubenswrapper[4782]: I1124 12:12:38.723065 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.214886 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b-config-bblmp" event={"ID":"9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96","Type":"ContainerDied","Data":"4ec97c014d565c7f25b8348720296df06a4dfb4f8b1811e31633e257e9edb2c1"} Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.215132 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec97c014d565c7f25b8348720296df06a4dfb4f8b1811e31633e257e9edb2c1" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.214905 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-bblmp" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.229518 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"2fb245b4adfa51978f7761db26b8c6e8195c9381b660fac7e3c5eaad1dbb9f24"} Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.229547 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"85d87096361609d39c203fcf29b3534aa852c1f2cc0d2be1688359c2a0d32799"} Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.229578 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"0e5234eda19d54653e85aa01be96bfb2f94b065aca6d1c5d8cf5b62067301e2f"} Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.643583 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-m6c9b-config-bblmp"] Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.649624 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-m6c9b-config-bblmp"] Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.779872 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m6c9b-config-m9f4f"] Nov 24 12:12:39 crc kubenswrapper[4782]: E1124 12:12:39.780502 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" containerName="ovn-config" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.780588 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" containerName="ovn-config" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.780872 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" containerName="ovn-config" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.782426 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.786848 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.803234 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m6c9b-config-m9f4f"] Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.820259 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-m6c9b" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.941356 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-log-ovn\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.941436 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.941492 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-additional-scripts\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.941551 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run-ovn\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.941589 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl86q\" (UniqueName: \"kubernetes.io/projected/6bf08827-30cf-433c-a21d-bff880f4c8e5-kube-api-access-zl86q\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:39 crc kubenswrapper[4782]: I1124 12:12:39.941653 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-scripts\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-scripts\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043261 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-log-ovn\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043282 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043315 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-additional-scripts\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043344 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run-ovn\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043626 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run-ovn\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl86q\" (UniqueName: \"kubernetes.io/projected/6bf08827-30cf-433c-a21d-bff880f4c8e5-kube-api-access-zl86q\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043660 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-log-ovn\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.043674 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.139295 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-additional-scripts\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.140837 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-scripts\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.148547 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl86q\" (UniqueName: \"kubernetes.io/projected/6bf08827-30cf-433c-a21d-bff880f4c8e5-kube-api-access-zl86q\") pod \"ovn-controller-m6c9b-config-m9f4f\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.243290 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"cb5e034cc54694893a1b108647e1350f7f30c5f25d0496513d301059e42b5bc9"} Nov 24 12:12:40 crc kubenswrapper[4782]: I1124 12:12:40.399744 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:41 crc kubenswrapper[4782]: I1124 12:12:41.057279 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m6c9b-config-m9f4f"] Nov 24 12:12:41 crc kubenswrapper[4782]: I1124 12:12:41.270218 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"9272beac71e23ba60b4f0962382bd75be6c00eb58524900d8c35f09114396be5"} Nov 24 12:12:41 crc kubenswrapper[4782]: I1124 12:12:41.272453 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b-config-m9f4f" event={"ID":"6bf08827-30cf-433c-a21d-bff880f4c8e5","Type":"ContainerStarted","Data":"66e92f6ce7786f6c471e31da7e8b2a31a745b1a7c8da0751eefc1d2ea42b0c77"} Nov 24 12:12:41 crc kubenswrapper[4782]: I1124 12:12:41.507051 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96" path="/var/lib/kubelet/pods/9d8b3e9f-aa85-412b-a2ec-0908f1ef3a96/volumes" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.289688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"fcb4921fefd9739470e0665c1a8dc632dcdded93c3f83c99947ca248045c5df8"} Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.289731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81dbdeba-8b69-4638-b076-29f9edaeffa6","Type":"ContainerStarted","Data":"393991c8900e04cfee692d30bdfc69c28982dee89fc477668256e449f20bbc59"} Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.292661 4782 generic.go:334] "Generic (PLEG): container finished" podID="6bf08827-30cf-433c-a21d-bff880f4c8e5" containerID="a9678f10945e1b4ebc929226745a5e2a662caa0cb8e0c89afc643b644dfef8cf" exitCode=0 Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.292686 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b-config-m9f4f" event={"ID":"6bf08827-30cf-433c-a21d-bff880f4c8e5","Type":"ContainerDied","Data":"a9678f10945e1b4ebc929226745a5e2a662caa0cb8e0c89afc643b644dfef8cf"} Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.373597 4782 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/swift-storage-0" podStartSLOduration=22.987834611 podStartE2EDuration="31.373580721s" podCreationTimestamp="2025-11-24 12:12:11 +0000 UTC" firstStartedPulling="2025-11-24 12:12:29.687013336 +0000 UTC m=+998.930847105" lastFinishedPulling="2025-11-24 12:12:38.072759446 +0000 UTC m=+1007.316593215" observedRunningTime="2025-11-24 12:12:42.367322994 +0000 UTC m=+1011.611156773" watchObservedRunningTime="2025-11-24 12:12:42.373580721 +0000 UTC m=+1011.617414490" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.626391 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-m5hq2"] Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.627904 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.634704 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.657499 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-m5hq2"] Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.685679 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.685728 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljf7\" (UniqueName: \"kubernetes.io/projected/68a5d934-59c7-4255-afde-22e3a83cb221-kube-api-access-dljf7\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.685811 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.685837 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.685899 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-config\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.686268 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: 
\"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.788096 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.788191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.788216 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljf7\" (UniqueName: \"kubernetes.io/projected/68a5d934-59c7-4255-afde-22e3a83cb221-kube-api-access-dljf7\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.788243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.788272 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.788298 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-config\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.789000 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.791207 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-config\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.792055 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc 
kubenswrapper[4782]: I1124 12:12:42.792752 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.795028 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.820722 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljf7\" (UniqueName: \"kubernetes.io/projected/68a5d934-59c7-4255-afde-22e3a83cb221-kube-api-access-dljf7\") pod \"dnsmasq-dns-6d5b6d6b67-m5hq2\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:42 crc kubenswrapper[4782]: I1124 12:12:42.970887 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:12:45 crc kubenswrapper[4782]: I1124 12:12:45.748610 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.107336 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.198451 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f4cxf"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.199675 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.238145 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f4cxf"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.309354 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zbnnb"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.310686 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.328130 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zbnnb"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.353308 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjqm\" (UniqueName: \"kubernetes.io/projected/50c22dba-5e37-409b-b24f-09afa0abeaa8-kube-api-access-phjqm\") pod \"cinder-db-create-f4cxf\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.353442 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c22dba-5e37-409b-b24f-09afa0abeaa8-operator-scripts\") pod \"cinder-db-create-f4cxf\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.367716 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-862f-account-create-wdpn8"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.369248 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.374999 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.413502 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-862f-account-create-wdpn8"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.440499 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b931-account-create-btftx"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.441827 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.446690 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.451923 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b931-account-create-btftx"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.454422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c22dba-5e37-409b-b24f-09afa0abeaa8-operator-scripts\") pod \"cinder-db-create-f4cxf\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.454553 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjqm\" (UniqueName: \"kubernetes.io/projected/50c22dba-5e37-409b-b24f-09afa0abeaa8-kube-api-access-phjqm\") pod \"cinder-db-create-f4cxf\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.454600 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r27nd\" (UniqueName: \"kubernetes.io/projected/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-kube-api-access-r27nd\") pod \"barbican-db-create-zbnnb\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.454643 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-operator-scripts\") pod \"barbican-db-create-zbnnb\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.455466 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c22dba-5e37-409b-b24f-09afa0abeaa8-operator-scripts\") pod \"cinder-db-create-f4cxf\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.526062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjqm\" (UniqueName: \"kubernetes.io/projected/50c22dba-5e37-409b-b24f-09afa0abeaa8-kube-api-access-phjqm\") pod \"cinder-db-create-f4cxf\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.556798 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qhh\" (UniqueName: \"kubernetes.io/projected/f31274b3-3d46-4f50-b070-3238dba1c066-kube-api-access-q6qhh\") pod \"barbican-862f-account-create-wdpn8\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.556876 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r27nd\" (UniqueName: \"kubernetes.io/projected/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-kube-api-access-r27nd\") pod \"barbican-db-create-zbnnb\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " 
pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.556912 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b761020-fa6c-4ec4-b3d0-5eb939867db4-operator-scripts\") pod \"cinder-b931-account-create-btftx\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.556935 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-operator-scripts\") pod \"barbican-db-create-zbnnb\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.556984 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcjz\" (UniqueName: \"kubernetes.io/projected/4b761020-fa6c-4ec4-b3d0-5eb939867db4-kube-api-access-zwcjz\") pod \"cinder-b931-account-create-btftx\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.557006 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31274b3-3d46-4f50-b070-3238dba1c066-operator-scripts\") pod \"barbican-862f-account-create-wdpn8\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.558023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-operator-scripts\") pod \"barbican-db-create-zbnnb\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.576981 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r27nd\" (UniqueName: \"kubernetes.io/projected/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-kube-api-access-r27nd\") pod \"barbican-db-create-zbnnb\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.588109 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-b6ctv"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.589367 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.604420 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b6ctv"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.629635 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zbnnb" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.658044 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cpbp\" (UniqueName: \"kubernetes.io/projected/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-kube-api-access-5cpbp\") pod \"neutron-db-create-b6ctv\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.658110 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6qhh\" (UniqueName: \"kubernetes.io/projected/f31274b3-3d46-4f50-b070-3238dba1c066-kube-api-access-q6qhh\") pod \"barbican-862f-account-create-wdpn8\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.658180 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b761020-fa6c-4ec4-b3d0-5eb939867db4-operator-scripts\") pod \"cinder-b931-account-create-btftx\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.658254 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcjz\" (UniqueName: \"kubernetes.io/projected/4b761020-fa6c-4ec4-b3d0-5eb939867db4-kube-api-access-zwcjz\") pod \"cinder-b931-account-create-btftx\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.658287 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31274b3-3d46-4f50-b070-3238dba1c066-operator-scripts\") pod \"barbican-862f-account-create-wdpn8\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.658322 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-operator-scripts\") pod \"neutron-db-create-b6ctv\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.659253 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b761020-fa6c-4ec4-b3d0-5eb939867db4-operator-scripts\") pod \"cinder-b931-account-create-btftx\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.659358 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31274b3-3d46-4f50-b070-3238dba1c066-operator-scripts\") pod \"barbican-862f-account-create-wdpn8\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.688309 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6qhh\" (UniqueName: 
\"kubernetes.io/projected/f31274b3-3d46-4f50-b070-3238dba1c066-kube-api-access-q6qhh\") pod \"barbican-862f-account-create-wdpn8\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.689196 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcjz\" (UniqueName: \"kubernetes.io/projected/4b761020-fa6c-4ec4-b3d0-5eb939867db4-kube-api-access-zwcjz\") pod \"cinder-b931-account-create-btftx\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.697195 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fd2c-account-create-kh9w8"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.698351 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.714267 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.715260 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.719754 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd2c-account-create-kh9w8"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.760111 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64e8c831-2986-4488-a513-6fc375b64046-operator-scripts\") pod \"neutron-fd2c-account-create-kh9w8\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.761357 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5n4\" (UniqueName: \"kubernetes.io/projected/64e8c831-2986-4488-a513-6fc375b64046-kube-api-access-ts5n4\") pod \"neutron-fd2c-account-create-kh9w8\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.761501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-operator-scripts\") pod \"neutron-db-create-b6ctv\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.761644 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cpbp\" (UniqueName: \"kubernetes.io/projected/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-kube-api-access-5cpbp\") pod \"neutron-db-create-b6ctv\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.762871 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-operator-scripts\") pod \"neutron-db-create-b6ctv\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc 
kubenswrapper[4782]: I1124 12:12:46.767210 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.769889 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bvx8c"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.771304 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.778718 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.778972 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.783080 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.794296 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bh78b" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.805501 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bvx8c"] Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.812180 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cpbp\" (UniqueName: \"kubernetes.io/projected/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-kube-api-access-5cpbp\") pod \"neutron-db-create-b6ctv\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.826007 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f4cxf" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.863829 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64e8c831-2986-4488-a513-6fc375b64046-operator-scripts\") pod \"neutron-fd2c-account-create-kh9w8\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.864162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts5n4\" (UniqueName: \"kubernetes.io/projected/64e8c831-2986-4488-a513-6fc375b64046-kube-api-access-ts5n4\") pod \"neutron-fd2c-account-create-kh9w8\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.864302 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-combined-ca-bundle\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.864431 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-config-data\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.864571 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpcbl\" (UniqueName: \"kubernetes.io/projected/4e69f3b5-7735-44bd-9a5c-aa6060e04858-kube-api-access-kpcbl\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.864795 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64e8c831-2986-4488-a513-6fc375b64046-operator-scripts\") pod \"neutron-fd2c-account-create-kh9w8\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.886762 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts5n4\" (UniqueName: \"kubernetes.io/projected/64e8c831-2986-4488-a513-6fc375b64046-kube-api-access-ts5n4\") pod \"neutron-fd2c-account-create-kh9w8\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.943090 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-b6ctv" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.965627 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-combined-ca-bundle\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.966352 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-config-data\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.966436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpcbl\" (UniqueName: \"kubernetes.io/projected/4e69f3b5-7735-44bd-9a5c-aa6060e04858-kube-api-access-kpcbl\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.971063 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-config-data\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.971529 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-combined-ca-bundle\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:46 crc kubenswrapper[4782]: I1124 12:12:46.996683 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpcbl\" (UniqueName: \"kubernetes.io/projected/4e69f3b5-7735-44bd-9a5c-aa6060e04858-kube-api-access-kpcbl\") pod \"keystone-db-sync-bvx8c\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:47 crc kubenswrapper[4782]: I1124 12:12:47.107004 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:12:47 crc kubenswrapper[4782]: I1124 12:12:47.150929 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.398273 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m6c9b-config-m9f4f" event={"ID":"6bf08827-30cf-433c-a21d-bff880f4c8e5","Type":"ContainerDied","Data":"66e92f6ce7786f6c471e31da7e8b2a31a745b1a7c8da0751eefc1d2ea42b0c77"} Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.399131 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e92f6ce7786f6c471e31da7e8b2a31a745b1a7c8da0751eefc1d2ea42b0c77" Nov 24 12:12:54 crc kubenswrapper[4782]: E1124 12:12:54.438941 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 24 12:12:54 crc kubenswrapper[4782]: E1124 12:12:54.439156 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvf48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-nftg9_openstack(cce98ec2-7dab-420c-8f56-e80c874419eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:12:54 crc kubenswrapper[4782]: E1124 12:12:54.440391 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-nftg9" podUID="cce98ec2-7dab-420c-8f56-e80c874419eb" Nov 24 12:12:54 
crc kubenswrapper[4782]: I1124 12:12:54.621195 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646360 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-log-ovn\") pod \"6bf08827-30cf-433c-a21d-bff880f4c8e5\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl86q\" (UniqueName: \"kubernetes.io/projected/6bf08827-30cf-433c-a21d-bff880f4c8e5-kube-api-access-zl86q\") pod \"6bf08827-30cf-433c-a21d-bff880f4c8e5\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646468 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6bf08827-30cf-433c-a21d-bff880f4c8e5" (UID: "6bf08827-30cf-433c-a21d-bff880f4c8e5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646602 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run\") pod \"6bf08827-30cf-433c-a21d-bff880f4c8e5\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646652 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run-ovn\") pod \"6bf08827-30cf-433c-a21d-bff880f4c8e5\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646648 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run" (OuterVolumeSpecName: "var-run") pod "6bf08827-30cf-433c-a21d-bff880f4c8e5" (UID: "6bf08827-30cf-433c-a21d-bff880f4c8e5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646765 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6bf08827-30cf-433c-a21d-bff880f4c8e5" (UID: "6bf08827-30cf-433c-a21d-bff880f4c8e5"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646833 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-scripts\") pod \"6bf08827-30cf-433c-a21d-bff880f4c8e5\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.646891 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-additional-scripts\") pod \"6bf08827-30cf-433c-a21d-bff880f4c8e5\" (UID: \"6bf08827-30cf-433c-a21d-bff880f4c8e5\") " Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.648052 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6bf08827-30cf-433c-a21d-bff880f4c8e5" (UID: "6bf08827-30cf-433c-a21d-bff880f4c8e5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.648132 4782 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.648147 4782 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.648158 4782 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bf08827-30cf-433c-a21d-bff880f4c8e5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.649172 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-scripts" (OuterVolumeSpecName: "scripts") pod "6bf08827-30cf-433c-a21d-bff880f4c8e5" (UID: "6bf08827-30cf-433c-a21d-bff880f4c8e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.668640 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf08827-30cf-433c-a21d-bff880f4c8e5-kube-api-access-zl86q" (OuterVolumeSpecName: "kube-api-access-zl86q") pod "6bf08827-30cf-433c-a21d-bff880f4c8e5" (UID: "6bf08827-30cf-433c-a21d-bff880f4c8e5"). InnerVolumeSpecName "kube-api-access-zl86q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.749432 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.749459 4782 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf08827-30cf-433c-a21d-bff880f4c8e5-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:54 crc kubenswrapper[4782]: I1124 12:12:54.749470 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl86q\" (UniqueName: \"kubernetes.io/projected/6bf08827-30cf-433c-a21d-bff880f4c8e5-kube-api-access-zl86q\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.104227 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b931-account-create-btftx"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.120345 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.127454 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zbnnb"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.136144 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-m5hq2"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.233161 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd2c-account-create-kh9w8"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.238499 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.311274 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b6ctv"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.325016 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-862f-account-create-wdpn8"] Nov 24 12:12:55 crc kubenswrapper[4782]: W1124 12:12:55.328724 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a828c03_9f43_48c5_b19f_43a7a1f7f0c6.slice/crio-a469808e92dcb919b7f7fb5c248f93d258a09b0de732c0979e503451b96b76ba WatchSource:0}: Error finding container a469808e92dcb919b7f7fb5c248f93d258a09b0de732c0979e503451b96b76ba: Status 404 returned error can't find the container with id a469808e92dcb919b7f7fb5c248f93d258a09b0de732c0979e503451b96b76ba Nov 24 12:12:55 crc kubenswrapper[4782]: W1124 12:12:55.332346 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e69f3b5_7735_44bd_9a5c_aa6060e04858.slice/crio-7f9bb0816637efc323ca2ce59a8a780b7e7b792eaead1af132f12c2bbdafea23 WatchSource:0}: Error finding container 7f9bb0816637efc323ca2ce59a8a780b7e7b792eaead1af132f12c2bbdafea23: Status 404 returned error can't find the container with id 7f9bb0816637efc323ca2ce59a8a780b7e7b792eaead1af132f12c2bbdafea23 Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.333579 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bvx8c"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.339721 4782 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"barbican-db-secret" Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.344655 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f4cxf"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.408207 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f4cxf" event={"ID":"50c22dba-5e37-409b-b24f-09afa0abeaa8","Type":"ContainerStarted","Data":"4ab4c3263187ca69a35b6a2e4a6440095226e4a0d7b18f97dbb16b3bb5b08833"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.409218 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b931-account-create-btftx" event={"ID":"4b761020-fa6c-4ec4-b3d0-5eb939867db4","Type":"ContainerStarted","Data":"a46f1ad047cf36c3b005e337eb35acd7930a1f464fcc02e7036807bf80ca534b"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.409239 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b931-account-create-btftx" event={"ID":"4b761020-fa6c-4ec4-b3d0-5eb939867db4","Type":"ContainerStarted","Data":"c17f61314d9b088fc1ba8fa9903b222e6df6357150381a87615f4806cccb6015"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.414173 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b6ctv" event={"ID":"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6","Type":"ContainerStarted","Data":"a469808e92dcb919b7f7fb5c248f93d258a09b0de732c0979e503451b96b76ba"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.414972 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-862f-account-create-wdpn8" event={"ID":"f31274b3-3d46-4f50-b070-3238dba1c066","Type":"ContainerStarted","Data":"87840dbb541d2845b0662f9a6fb198a36a73046f7e8e24426c1e0934144c7c02"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.415832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd2c-account-create-kh9w8" event={"ID":"64e8c831-2986-4488-a513-6fc375b64046","Type":"ContainerStarted","Data":"ba087bf9682616a9ffd2ffc7dd3741a4534850bedba528c6dd5212bbe3d4736a"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.417032 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zbnnb" event={"ID":"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11","Type":"ContainerStarted","Data":"3688b8cdfd4b4b2afe67f8c98c1aefdaab981bd82fd599e8b72fea7f4a5853c1"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.417054 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zbnnb" event={"ID":"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11","Type":"ContainerStarted","Data":"32ee0bcf06a585bd30fd7cca3d92b47113d77dd00d838fdf63baeb39307642b7"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.421636 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bvx8c" event={"ID":"4e69f3b5-7735-44bd-9a5c-aa6060e04858","Type":"ContainerStarted","Data":"7f9bb0816637efc323ca2ce59a8a780b7e7b792eaead1af132f12c2bbdafea23"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.423108 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" event={"ID":"68a5d934-59c7-4255-afde-22e3a83cb221","Type":"ContainerStarted","Data":"13807f40940003ea9f95686263883279df862a44ec4626ad0aa09e80bdcaeab5"} Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.423175 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:12:55 crc kubenswrapper[4782]: E1124 12:12:55.424421 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-nftg9" podUID="cce98ec2-7dab-420c-8f56-e80c874419eb" Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.436224 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b931-account-create-btftx" podStartSLOduration=9.436204764 podStartE2EDuration="9.436204764s" podCreationTimestamp="2025-11-24 12:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:55.429829524 +0000 UTC m=+1024.673663313" watchObservedRunningTime="2025-11-24 12:12:55.436204764 +0000 UTC m=+1024.680038533" Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.480499 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-zbnnb" podStartSLOduration=9.48047797 podStartE2EDuration="9.48047797s" podCreationTimestamp="2025-11-24 12:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:55.476614116 +0000 UTC m=+1024.720447885" watchObservedRunningTime="2025-11-24 12:12:55.48047797 +0000 UTC m=+1024.724311739" Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.726517 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-m6c9b-config-m9f4f"] Nov 24 12:12:55 crc kubenswrapper[4782]: I1124 12:12:55.732139 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-m6c9b-config-m9f4f"] Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.432935 4782 generic.go:334] "Generic (PLEG): container finished" podID="50c22dba-5e37-409b-b24f-09afa0abeaa8" containerID="e002faffcb9d0b649f67252d2070f51ea2188a7606919c374c95f8c0e8f71db6" exitCode=0 Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.432978 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f4cxf" event={"ID":"50c22dba-5e37-409b-b24f-09afa0abeaa8","Type":"ContainerDied","Data":"e002faffcb9d0b649f67252d2070f51ea2188a7606919c374c95f8c0e8f71db6"} Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.438207 4782 generic.go:334] "Generic (PLEG): container finished" podID="4b761020-fa6c-4ec4-b3d0-5eb939867db4" containerID="a46f1ad047cf36c3b005e337eb35acd7930a1f464fcc02e7036807bf80ca534b" exitCode=0 Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.438264 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b931-account-create-btftx" event={"ID":"4b761020-fa6c-4ec4-b3d0-5eb939867db4","Type":"ContainerDied","Data":"a46f1ad047cf36c3b005e337eb35acd7930a1f464fcc02e7036807bf80ca534b"} Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.440148 4782 generic.go:334] "Generic (PLEG): container finished" podID="6a828c03-9f43-48c5-b19f-43a7a1f7f0c6" containerID="29dbc560b3a5095722406d265498a0352d09ae3cf9a5f9459e310c23fbc0b1fa" exitCode=0 Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.440193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b6ctv" 
event={"ID":"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6","Type":"ContainerDied","Data":"29dbc560b3a5095722406d265498a0352d09ae3cf9a5f9459e310c23fbc0b1fa"} Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.442086 4782 generic.go:334] "Generic (PLEG): container finished" podID="f31274b3-3d46-4f50-b070-3238dba1c066" containerID="4931d235129e882b3cc93ba0073bdd7c24594d8dd23b7fbd141c64984fff9b72" exitCode=0 Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.442126 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-862f-account-create-wdpn8" event={"ID":"f31274b3-3d46-4f50-b070-3238dba1c066","Type":"ContainerDied","Data":"4931d235129e882b3cc93ba0073bdd7c24594d8dd23b7fbd141c64984fff9b72"} Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.443596 4782 generic.go:334] "Generic (PLEG): container finished" podID="64e8c831-2986-4488-a513-6fc375b64046" containerID="c89d13a523e164b8776c7628cb9ac2682bfe5de3666b4ae219f98cc6b5ee2346" exitCode=0 Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.443634 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd2c-account-create-kh9w8" event={"ID":"64e8c831-2986-4488-a513-6fc375b64046","Type":"ContainerDied","Data":"c89d13a523e164b8776c7628cb9ac2682bfe5de3666b4ae219f98cc6b5ee2346"} Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.444882 4782 generic.go:334] "Generic (PLEG): container finished" podID="ffc8e8d2-4b6f-43e9-9330-b58dfde90a11" containerID="3688b8cdfd4b4b2afe67f8c98c1aefdaab981bd82fd599e8b72fea7f4a5853c1" exitCode=0 Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.445598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zbnnb" event={"ID":"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11","Type":"ContainerDied","Data":"3688b8cdfd4b4b2afe67f8c98c1aefdaab981bd82fd599e8b72fea7f4a5853c1"} Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.452223 4782 generic.go:334] "Generic (PLEG): container finished" podID="68a5d934-59c7-4255-afde-22e3a83cb221" containerID="63114ac5fc34a1992d4892760e383da11c6c5d2719a5f13e1b0c950b9ddad18f" exitCode=0 Nov 24 12:12:56 crc kubenswrapper[4782]: I1124 12:12:56.452278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" event={"ID":"68a5d934-59c7-4255-afde-22e3a83cb221","Type":"ContainerDied","Data":"63114ac5fc34a1992d4892760e383da11c6c5d2719a5f13e1b0c950b9ddad18f"} Nov 24 12:12:57 crc kubenswrapper[4782]: I1124 12:12:57.463194 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" event={"ID":"68a5d934-59c7-4255-afde-22e3a83cb221","Type":"ContainerStarted","Data":"bee0d71cc34b94370399c08e656319cdde6112b2d05914f70f4775b1047d5d9e"} Nov 24 12:12:57 crc kubenswrapper[4782]: I1124 12:12:57.500058 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podStartSLOduration=15.500033765 podStartE2EDuration="15.500033765s" podCreationTimestamp="2025-11-24 12:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:12:57.486761609 +0000 UTC m=+1026.730595388" watchObservedRunningTime="2025-11-24 12:12:57.500033765 +0000 UTC m=+1026.743867554" Nov 24 12:12:57 crc kubenswrapper[4782]: I1124 12:12:57.519832 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf08827-30cf-433c-a21d-bff880f4c8e5" 
path="/var/lib/kubelet/pods/6bf08827-30cf-433c-a21d-bff880f4c8e5/volumes" Nov 24 12:12:57 crc kubenswrapper[4782]: I1124 12:12:57.971928 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.101702 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f4cxf" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.111559 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.140799 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.154264 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b6ctv" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.161623 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts5n4\" (UniqueName: \"kubernetes.io/projected/64e8c831-2986-4488-a513-6fc375b64046-kube-api-access-ts5n4\") pod \"64e8c831-2986-4488-a513-6fc375b64046\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.161726 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c22dba-5e37-409b-b24f-09afa0abeaa8-operator-scripts\") pod \"50c22dba-5e37-409b-b24f-09afa0abeaa8\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.161765 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31274b3-3d46-4f50-b070-3238dba1c066-operator-scripts\") pod \"f31274b3-3d46-4f50-b070-3238dba1c066\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.161798 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64e8c831-2986-4488-a513-6fc375b64046-operator-scripts\") pod \"64e8c831-2986-4488-a513-6fc375b64046\" (UID: \"64e8c831-2986-4488-a513-6fc375b64046\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.161831 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6qhh\" (UniqueName: \"kubernetes.io/projected/f31274b3-3d46-4f50-b070-3238dba1c066-kube-api-access-q6qhh\") pod \"f31274b3-3d46-4f50-b070-3238dba1c066\" (UID: \"f31274b3-3d46-4f50-b070-3238dba1c066\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.161895 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phjqm\" (UniqueName: \"kubernetes.io/projected/50c22dba-5e37-409b-b24f-09afa0abeaa8-kube-api-access-phjqm\") pod \"50c22dba-5e37-409b-b24f-09afa0abeaa8\" (UID: \"50c22dba-5e37-409b-b24f-09afa0abeaa8\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.162830 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e8c831-2986-4488-a513-6fc375b64046-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64e8c831-2986-4488-a513-6fc375b64046" (UID: "64e8c831-2986-4488-a513-6fc375b64046"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.162830 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31274b3-3d46-4f50-b070-3238dba1c066-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f31274b3-3d46-4f50-b070-3238dba1c066" (UID: "f31274b3-3d46-4f50-b070-3238dba1c066"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.163896 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c22dba-5e37-409b-b24f-09afa0abeaa8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50c22dba-5e37-409b-b24f-09afa0abeaa8" (UID: "50c22dba-5e37-409b-b24f-09afa0abeaa8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.172752 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c22dba-5e37-409b-b24f-09afa0abeaa8-kube-api-access-phjqm" (OuterVolumeSpecName: "kube-api-access-phjqm") pod "50c22dba-5e37-409b-b24f-09afa0abeaa8" (UID: "50c22dba-5e37-409b-b24f-09afa0abeaa8"). InnerVolumeSpecName "kube-api-access-phjqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.174205 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e8c831-2986-4488-a513-6fc375b64046-kube-api-access-ts5n4" (OuterVolumeSpecName: "kube-api-access-ts5n4") pod "64e8c831-2986-4488-a513-6fc375b64046" (UID: "64e8c831-2986-4488-a513-6fc375b64046"). InnerVolumeSpecName "kube-api-access-ts5n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.181674 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31274b3-3d46-4f50-b070-3238dba1c066-kube-api-access-q6qhh" (OuterVolumeSpecName: "kube-api-access-q6qhh") pod "f31274b3-3d46-4f50-b070-3238dba1c066" (UID: "f31274b3-3d46-4f50-b070-3238dba1c066"). InnerVolumeSpecName "kube-api-access-q6qhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.221605 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.241673 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zbnnb" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263160 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cpbp\" (UniqueName: \"kubernetes.io/projected/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-kube-api-access-5cpbp\") pod \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263262 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b761020-fa6c-4ec4-b3d0-5eb939867db4-operator-scripts\") pod \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263287 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r27nd\" (UniqueName: \"kubernetes.io/projected/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-kube-api-access-r27nd\") pod \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263321 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwcjz\" (UniqueName: \"kubernetes.io/projected/4b761020-fa6c-4ec4-b3d0-5eb939867db4-kube-api-access-zwcjz\") pod \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\" (UID: \"4b761020-fa6c-4ec4-b3d0-5eb939867db4\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263339 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-operator-scripts\") pod \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\" (UID: \"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263356 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-operator-scripts\") pod \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\" (UID: \"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6\") " Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263618 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c22dba-5e37-409b-b24f-09afa0abeaa8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263631 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31274b3-3d46-4f50-b070-3238dba1c066-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263639 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64e8c831-2986-4488-a513-6fc375b64046-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263654 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6qhh\" (UniqueName: \"kubernetes.io/projected/f31274b3-3d46-4f50-b070-3238dba1c066-kube-api-access-q6qhh\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263664 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phjqm\" (UniqueName: 
\"kubernetes.io/projected/50c22dba-5e37-409b-b24f-09afa0abeaa8-kube-api-access-phjqm\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.263674 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts5n4\" (UniqueName: \"kubernetes.io/projected/64e8c831-2986-4488-a513-6fc375b64046-kube-api-access-ts5n4\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.264715 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a828c03-9f43-48c5-b19f-43a7a1f7f0c6" (UID: "6a828c03-9f43-48c5-b19f-43a7a1f7f0c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.265697 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b761020-fa6c-4ec4-b3d0-5eb939867db4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b761020-fa6c-4ec4-b3d0-5eb939867db4" (UID: "4b761020-fa6c-4ec4-b3d0-5eb939867db4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.266024 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-kube-api-access-r27nd" (OuterVolumeSpecName: "kube-api-access-r27nd") pod "ffc8e8d2-4b6f-43e9-9330-b58dfde90a11" (UID: "ffc8e8d2-4b6f-43e9-9330-b58dfde90a11"). InnerVolumeSpecName "kube-api-access-r27nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.268203 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-kube-api-access-5cpbp" (OuterVolumeSpecName: "kube-api-access-5cpbp") pod "6a828c03-9f43-48c5-b19f-43a7a1f7f0c6" (UID: "6a828c03-9f43-48c5-b19f-43a7a1f7f0c6"). InnerVolumeSpecName "kube-api-access-5cpbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.269059 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b761020-fa6c-4ec4-b3d0-5eb939867db4-kube-api-access-zwcjz" (OuterVolumeSpecName: "kube-api-access-zwcjz") pod "4b761020-fa6c-4ec4-b3d0-5eb939867db4" (UID: "4b761020-fa6c-4ec4-b3d0-5eb939867db4"). InnerVolumeSpecName "kube-api-access-zwcjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.271715 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffc8e8d2-4b6f-43e9-9330-b58dfde90a11" (UID: "ffc8e8d2-4b6f-43e9-9330-b58dfde90a11"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.365374 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b761020-fa6c-4ec4-b3d0-5eb939867db4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.365684 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r27nd\" (UniqueName: \"kubernetes.io/projected/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-kube-api-access-r27nd\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.365827 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwcjz\" (UniqueName: \"kubernetes.io/projected/4b761020-fa6c-4ec4-b3d0-5eb939867db4-kube-api-access-zwcjz\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.365954 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.366066 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.366172 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cpbp\" (UniqueName: \"kubernetes.io/projected/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6-kube-api-access-5cpbp\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.485956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-862f-account-create-wdpn8" event={"ID":"f31274b3-3d46-4f50-b070-3238dba1c066","Type":"ContainerDied","Data":"87840dbb541d2845b0662f9a6fb198a36a73046f7e8e24426c1e0934144c7c02"} Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.486297 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87840dbb541d2845b0662f9a6fb198a36a73046f7e8e24426c1e0934144c7c02" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.486483 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-862f-account-create-wdpn8" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.500112 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zbnnb" event={"ID":"ffc8e8d2-4b6f-43e9-9330-b58dfde90a11","Type":"ContainerDied","Data":"32ee0bcf06a585bd30fd7cca3d92b47113d77dd00d838fdf63baeb39307642b7"} Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.500802 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ee0bcf06a585bd30fd7cca3d92b47113d77dd00d838fdf63baeb39307642b7" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.500168 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zbnnb" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.513046 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bvx8c" event={"ID":"4e69f3b5-7735-44bd-9a5c-aa6060e04858","Type":"ContainerStarted","Data":"854752b27206e62ce2345f682908d3037b1348c5f9761b4c21b4ac736d626dc2"} Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.516289 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f4cxf" event={"ID":"50c22dba-5e37-409b-b24f-09afa0abeaa8","Type":"ContainerDied","Data":"4ab4c3263187ca69a35b6a2e4a6440095226e4a0d7b18f97dbb16b3bb5b08833"} Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.516323 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab4c3263187ca69a35b6a2e4a6440095226e4a0d7b18f97dbb16b3bb5b08833" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.516418 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f4cxf" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.543470 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bvx8c" podStartSLOduration=9.902647881 podStartE2EDuration="14.543452608s" podCreationTimestamp="2025-11-24 12:12:46 +0000 UTC" firstStartedPulling="2025-11-24 12:12:55.33593127 +0000 UTC m=+1024.579765029" lastFinishedPulling="2025-11-24 12:12:59.976735987 +0000 UTC m=+1029.220569756" observedRunningTime="2025-11-24 12:13:00.537882909 +0000 UTC m=+1029.781716678" watchObservedRunningTime="2025-11-24 12:13:00.543452608 +0000 UTC m=+1029.787286377" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.570989 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b931-account-create-btftx" event={"ID":"4b761020-fa6c-4ec4-b3d0-5eb939867db4","Type":"ContainerDied","Data":"c17f61314d9b088fc1ba8fa9903b222e6df6357150381a87615f4806cccb6015"} Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.571220 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c17f61314d9b088fc1ba8fa9903b222e6df6357150381a87615f4806cccb6015" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.571024 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b931-account-create-btftx" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.573149 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b6ctv" event={"ID":"6a828c03-9f43-48c5-b19f-43a7a1f7f0c6","Type":"ContainerDied","Data":"a469808e92dcb919b7f7fb5c248f93d258a09b0de732c0979e503451b96b76ba"} Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.573222 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a469808e92dcb919b7f7fb5c248f93d258a09b0de732c0979e503451b96b76ba" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.573186 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-b6ctv" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.575502 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd2c-account-create-kh9w8" event={"ID":"64e8c831-2986-4488-a513-6fc375b64046","Type":"ContainerDied","Data":"ba087bf9682616a9ffd2ffc7dd3741a4534850bedba528c6dd5212bbe3d4736a"} Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.575597 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba087bf9682616a9ffd2ffc7dd3741a4534850bedba528c6dd5212bbe3d4736a" Nov 24 12:13:00 crc kubenswrapper[4782]: I1124 12:13:00.575616 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd2c-account-create-kh9w8" Nov 24 12:13:02 crc kubenswrapper[4782]: I1124 12:13:02.973752 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.044577 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bs6cm"] Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.045051 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" podUID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerName="dnsmasq-dns" containerID="cri-o://ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8" gracePeriod=10 Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.493434 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.597298 4782 generic.go:334] "Generic (PLEG): container finished" podID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerID="ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8" exitCode=0 Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.597335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" event={"ID":"4c0393b0-4e2a-449c-ad19-aaddc8017944","Type":"ContainerDied","Data":"ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8"} Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.597359 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" event={"ID":"4c0393b0-4e2a-449c-ad19-aaddc8017944","Type":"ContainerDied","Data":"fd68247b75c80ba03e467e32a73c040e049531e77e7b6983e7493fd626375c87"} Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.597394 4782 scope.go:117] "RemoveContainer" containerID="ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.597763 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bs6cm" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.632747 4782 scope.go:117] "RemoveContainer" containerID="f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.654267 4782 scope.go:117] "RemoveContainer" containerID="ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8" Nov 24 12:13:03 crc kubenswrapper[4782]: E1124 12:13:03.656309 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8\": container with ID starting with ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8 not found: ID does not exist" containerID="ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.656352 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8"} err="failed to get container status \"ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8\": rpc error: code = NotFound desc = could not find container \"ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8\": container with ID starting with ed7254efe090c234a5df146e49bcee9e2f2dc7c6edfd97e416e7ec092e432fb8 not found: ID does not exist" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.656392 4782 scope.go:117] "RemoveContainer" containerID="f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.656855 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtldd\" (UniqueName: \"kubernetes.io/projected/4c0393b0-4e2a-449c-ad19-aaddc8017944-kube-api-access-dtldd\") pod \"4c0393b0-4e2a-449c-ad19-aaddc8017944\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.656947 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-nb\") pod \"4c0393b0-4e2a-449c-ad19-aaddc8017944\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.657027 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-dns-svc\") pod \"4c0393b0-4e2a-449c-ad19-aaddc8017944\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.657101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-config\") pod \"4c0393b0-4e2a-449c-ad19-aaddc8017944\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.657131 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-sb\") pod \"4c0393b0-4e2a-449c-ad19-aaddc8017944\" (UID: \"4c0393b0-4e2a-449c-ad19-aaddc8017944\") " Nov 24 12:13:03 crc kubenswrapper[4782]: E1124 12:13:03.660763 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd\": container with ID starting with f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd not found: ID does not exist" containerID="f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.660933 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd"} err="failed to get container status \"f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd\": rpc error: code = NotFound desc = could not find container \"f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd\": container with ID starting with f99044551d75e7d5d884e0fd87ad3a05fa2c980ab5ff13b6020efff2deea05bd not found: ID does not exist" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.663060 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0393b0-4e2a-449c-ad19-aaddc8017944-kube-api-access-dtldd" (OuterVolumeSpecName: "kube-api-access-dtldd") pod "4c0393b0-4e2a-449c-ad19-aaddc8017944" (UID: "4c0393b0-4e2a-449c-ad19-aaddc8017944"). InnerVolumeSpecName "kube-api-access-dtldd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.702200 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-config" (OuterVolumeSpecName: "config") pod "4c0393b0-4e2a-449c-ad19-aaddc8017944" (UID: "4c0393b0-4e2a-449c-ad19-aaddc8017944"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.703945 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c0393b0-4e2a-449c-ad19-aaddc8017944" (UID: "4c0393b0-4e2a-449c-ad19-aaddc8017944"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.705461 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c0393b0-4e2a-449c-ad19-aaddc8017944" (UID: "4c0393b0-4e2a-449c-ad19-aaddc8017944"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.707779 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c0393b0-4e2a-449c-ad19-aaddc8017944" (UID: "4c0393b0-4e2a-449c-ad19-aaddc8017944"). InnerVolumeSpecName "ovsdbserver-nb". 
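
The ContainerStatus/DeleteContainer NotFound errors above are the benign double-delete case: RemoveContainer has already deleted ed7254ef... and f9904455..., so the follow-up status lookups find nothing and pod_container_deletor logs an error even though cleanup succeeded. When scanning a journal for real CRI failures it is reasonable to filter exactly this pattern out. A sketch under the same stdin assumption; the heuristic is this sketch's own, not kubelet's classification:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		// Only consider CRI error lines.
		if !strings.Contains(line, "from runtime service failed") &&
			!strings.Contains(line, "DeleteContainer returned error") {
			continue
		}
		// NotFound after a RemoveContainer is an idempotent delete, not a fault.
		if strings.Contains(line, "code = NotFound") &&
			strings.Contains(line, "could not find container") {
			continue
		}
		fmt.Println("CRI failure worth a look:", line)
	}
}

Applied here, the dnsmasq-dns-b8fbc5445-bs6cm NotFound pairs drop out, while the earlier glance PullImage cancellation would still surface.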
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.758869 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtldd\" (UniqueName: \"kubernetes.io/projected/4c0393b0-4e2a-449c-ad19-aaddc8017944-kube-api-access-dtldd\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.758906 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.758916 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.758925 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.758934 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0393b0-4e2a-449c-ad19-aaddc8017944-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.932501 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bs6cm"] Nov 24 12:13:03 crc kubenswrapper[4782]: I1124 12:13:03.938949 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bs6cm"] Nov 24 12:13:05 crc kubenswrapper[4782]: I1124 12:13:05.507290 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0393b0-4e2a-449c-ad19-aaddc8017944" path="/var/lib/kubelet/pods/4c0393b0-4e2a-449c-ad19-aaddc8017944/volumes" Nov 24 12:13:05 crc kubenswrapper[4782]: I1124 12:13:05.619699 4782 generic.go:334] "Generic (PLEG): container finished" podID="4e69f3b5-7735-44bd-9a5c-aa6060e04858" containerID="854752b27206e62ce2345f682908d3037b1348c5f9761b4c21b4ac736d626dc2" exitCode=0 Nov 24 12:13:05 crc kubenswrapper[4782]: I1124 12:13:05.619740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bvx8c" event={"ID":"4e69f3b5-7735-44bd-9a5c-aa6060e04858","Type":"ContainerDied","Data":"854752b27206e62ce2345f682908d3037b1348c5f9761b4c21b4ac736d626dc2"} Nov 24 12:13:06 crc kubenswrapper[4782]: I1124 12:13:06.942099 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.126818 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-combined-ca-bundle\") pod \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.126954 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpcbl\" (UniqueName: \"kubernetes.io/projected/4e69f3b5-7735-44bd-9a5c-aa6060e04858-kube-api-access-kpcbl\") pod \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.127076 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-config-data\") pod \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\" (UID: \"4e69f3b5-7735-44bd-9a5c-aa6060e04858\") " Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.132817 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e69f3b5-7735-44bd-9a5c-aa6060e04858-kube-api-access-kpcbl" (OuterVolumeSpecName: "kube-api-access-kpcbl") pod "4e69f3b5-7735-44bd-9a5c-aa6060e04858" (UID: "4e69f3b5-7735-44bd-9a5c-aa6060e04858"). InnerVolumeSpecName "kube-api-access-kpcbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.168329 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e69f3b5-7735-44bd-9a5c-aa6060e04858" (UID: "4e69f3b5-7735-44bd-9a5c-aa6060e04858"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.186781 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-config-data" (OuterVolumeSpecName: "config-data") pod "4e69f3b5-7735-44bd-9a5c-aa6060e04858" (UID: "4e69f3b5-7735-44bd-9a5c-aa6060e04858"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.228920 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpcbl\" (UniqueName: \"kubernetes.io/projected/4e69f3b5-7735-44bd-9a5c-aa6060e04858-kube-api-access-kpcbl\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.228956 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.228968 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e69f3b5-7735-44bd-9a5c-aa6060e04858-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.636924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bvx8c" event={"ID":"4e69f3b5-7735-44bd-9a5c-aa6060e04858","Type":"ContainerDied","Data":"7f9bb0816637efc323ca2ce59a8a780b7e7b792eaead1af132f12c2bbdafea23"} Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.637331 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f9bb0816637efc323ca2ce59a8a780b7e7b792eaead1af132f12c2bbdafea23" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.636989 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bvx8c" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924178 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vb9v4"] Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924508 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf08827-30cf-433c-a21d-bff880f4c8e5" containerName="ovn-config" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924524 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf08827-30cf-433c-a21d-bff880f4c8e5" containerName="ovn-config" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924536 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerName="dnsmasq-dns" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924542 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerName="dnsmasq-dns" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924552 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e8c831-2986-4488-a513-6fc375b64046" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924559 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e8c831-2986-4488-a513-6fc375b64046" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924574 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerName="init" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924583 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerName="init" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924592 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31274b3-3d46-4f50-b070-3238dba1c066" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 
12:13:07.924598 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31274b3-3d46-4f50-b070-3238dba1c066" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924616 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a828c03-9f43-48c5-b19f-43a7a1f7f0c6" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924621 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a828c03-9f43-48c5-b19f-43a7a1f7f0c6" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924634 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc8e8d2-4b6f-43e9-9330-b58dfde90a11" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924639 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc8e8d2-4b6f-43e9-9330-b58dfde90a11" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924648 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c22dba-5e37-409b-b24f-09afa0abeaa8" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924654 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c22dba-5e37-409b-b24f-09afa0abeaa8" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924664 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b761020-fa6c-4ec4-b3d0-5eb939867db4" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924672 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b761020-fa6c-4ec4-b3d0-5eb939867db4" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: E1124 12:13:07.924682 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e69f3b5-7735-44bd-9a5c-aa6060e04858" containerName="keystone-db-sync" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.924689 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e69f3b5-7735-44bd-9a5c-aa6060e04858" containerName="keystone-db-sync" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.926978 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0393b0-4e2a-449c-ad19-aaddc8017944" containerName="dnsmasq-dns" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927028 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31274b3-3d46-4f50-b070-3238dba1c066" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927038 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e8c831-2986-4488-a513-6fc375b64046" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927051 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e69f3b5-7735-44bd-9a5c-aa6060e04858" containerName="keystone-db-sync" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927066 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b761020-fa6c-4ec4-b3d0-5eb939867db4" containerName="mariadb-account-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927076 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc8e8d2-4b6f-43e9-9330-b58dfde90a11" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927092 4782 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6bf08827-30cf-433c-a21d-bff880f4c8e5" containerName="ovn-config" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927103 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a828c03-9f43-48c5-b19f-43a7a1f7f0c6" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927111 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c22dba-5e37-409b-b24f-09afa0abeaa8" containerName="mariadb-database-create" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.927778 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.944417 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.944470 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.944592 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.944643 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bh78b" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.944726 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.964462 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-xqhhv"] Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.966218 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:07 crc kubenswrapper[4782]: I1124 12:13:07.980209 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vb9v4"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.022429 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-xqhhv"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-scripts\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045291 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-credential-keys\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045329 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tjl\" (UniqueName: \"kubernetes.io/projected/568c3257-2728-4979-9321-1520df80abb5-kube-api-access-l5tjl\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045366 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045537 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-combined-ca-bundle\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045576 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-config\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045608 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48tj\" (UniqueName: \"kubernetes.io/projected/0d24417a-519f-45a2-a4d1-733fcb35dafe-kube-api-access-s48tj\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045638 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045659 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045695 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-config-data\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045745 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-fernet-keys\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.045776 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146327 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-config-data\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146411 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-fernet-keys\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146445 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146493 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-scripts\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146531 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-credential-keys\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146563 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tjl\" (UniqueName: \"kubernetes.io/projected/568c3257-2728-4979-9321-1520df80abb5-kube-api-access-l5tjl\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146618 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146662 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-combined-ca-bundle\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-config\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146723 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48tj\" (UniqueName: 
\"kubernetes.io/projected/0d24417a-519f-45a2-a4d1-733fcb35dafe-kube-api-access-s48tj\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146751 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.146771 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.147442 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.151148 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.151752 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.154325 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-config\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.156023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.160411 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-config-data\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.160959 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-combined-ca-bundle\") pod \"keystone-bootstrap-vb9v4\" (UID: 
\"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.165420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-scripts\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.166022 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-credential-keys\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.166624 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-fernet-keys\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.249457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tjl\" (UniqueName: \"kubernetes.io/projected/568c3257-2728-4979-9321-1520df80abb5-kube-api-access-l5tjl\") pod \"dnsmasq-dns-6f8c45789f-xqhhv\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.263429 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84d67f97c7-khfdt"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.272807 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48tj\" (UniqueName: \"kubernetes.io/projected/0d24417a-519f-45a2-a4d1-733fcb35dafe-kube-api-access-s48tj\") pod \"keystone-bootstrap-vb9v4\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.281462 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.281572 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.290803 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.291074 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.291610 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.291783 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-nfqjc" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.301243 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-c4k2r"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.302259 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.313341 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.321536 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84d67f97c7-khfdt"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.325877 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.326223 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-58nxl" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.350362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-config\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.350640 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-config-data\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.350744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvj4s\" (UniqueName: \"kubernetes.io/projected/42e30cc3-dd65-45af-82ed-40354098a697-kube-api-access-nvj4s\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.350904 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/293902a5-6b27-494e-a53f-01e29ca865f3-horizon-secret-key\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.350978 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-combined-ca-bundle\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.351003 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/293902a5-6b27-494e-a53f-01e29ca865f3-logs\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.351048 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-scripts\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.351073 4782 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8c8g\" (UniqueName: \"kubernetes.io/projected/293902a5-6b27-494e-a53f-01e29ca865f3-kube-api-access-b8c8g\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.358695 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pv8t7"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.371080 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.387286 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.387750 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-958sd" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.387967 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.409548 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-c4k2r"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.489467 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pv8t7"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518441 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/293902a5-6b27-494e-a53f-01e29ca865f3-horizon-secret-key\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518508 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmx5\" (UniqueName: \"kubernetes.io/projected/73188696-c109-46f8-985b-6f5e9ef5b787-kube-api-access-5lmx5\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518554 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-combined-ca-bundle\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518575 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/293902a5-6b27-494e-a53f-01e29ca865f3-logs\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518605 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73188696-c109-46f8-985b-6f5e9ef5b787-etc-machine-id\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518634 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-scripts\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518673 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-scripts\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518695 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8c8g\" (UniqueName: \"kubernetes.io/projected/293902a5-6b27-494e-a53f-01e29ca865f3-kube-api-access-b8c8g\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518721 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-combined-ca-bundle\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518752 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-config\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518778 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-config-data\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518799 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvj4s\" (UniqueName: \"kubernetes.io/projected/42e30cc3-dd65-45af-82ed-40354098a697-kube-api-access-nvj4s\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518825 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-db-sync-config-data\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.518854 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-config-data\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.521022 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-scripts\") pod 
\"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.521265 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/293902a5-6b27-494e-a53f-01e29ca865f3-logs\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.528047 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-config-data\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.536531 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-combined-ca-bundle\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.549965 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.555765 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dfw9r"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.558538 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.560282 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-config\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.583755 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kgs76" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.583918 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.621639 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-db-sync-config-data\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.621896 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc6ng\" (UniqueName: \"kubernetes.io/projected/4e814aae-c22b-41ff-bf86-0cbe5a766eab-kube-api-access-rc6ng\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.621993 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-config-data\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " 
pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.622176 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-db-sync-config-data\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.622307 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmx5\" (UniqueName: \"kubernetes.io/projected/73188696-c109-46f8-985b-6f5e9ef5b787-kube-api-access-5lmx5\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.622451 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73188696-c109-46f8-985b-6f5e9ef5b787-etc-machine-id\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.622557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-scripts\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.622673 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-combined-ca-bundle\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.622755 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-combined-ca-bundle\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.625774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/293902a5-6b27-494e-a53f-01e29ca865f3-horizon-secret-key\") pod \"horizon-84d67f97c7-khfdt\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.627114 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvj4s\" (UniqueName: \"kubernetes.io/projected/42e30cc3-dd65-45af-82ed-40354098a697-kube-api-access-nvj4s\") pod \"neutron-db-sync-c4k2r\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") " pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.648301 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73188696-c109-46f8-985b-6f5e9ef5b787-etc-machine-id\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.685367 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-scripts\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.693857 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-combined-ca-bundle\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.697531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dfw9r"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.711218 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-db-sync-config-data\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.727150 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c4k2r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.740003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-db-sync-config-data\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.740542 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-combined-ca-bundle\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.740651 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc6ng\" (UniqueName: \"kubernetes.io/projected/4e814aae-c22b-41ff-bf86-0cbe5a766eab-kube-api-access-rc6ng\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.754669 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-config-data\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.754999 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-combined-ca-bundle\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.755234 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8c8g\" (UniqueName: \"kubernetes.io/projected/293902a5-6b27-494e-a53f-01e29ca865f3-kube-api-access-b8c8g\") pod \"horizon-84d67f97c7-khfdt\" (UID: 
\"293902a5-6b27-494e-a53f-01e29ca865f3\") " pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.761874 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-db-sync-config-data\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.768181 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lmx5\" (UniqueName: \"kubernetes.io/projected/73188696-c109-46f8-985b-6f5e9ef5b787-kube-api-access-5lmx5\") pod \"cinder-db-sync-pv8t7\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.772050 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tzz9s"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.777853 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.792560 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.799199 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.800517 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.800787 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dk5wn" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.800899 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.801534 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc6ng\" (UniqueName: \"kubernetes.io/projected/4e814aae-c22b-41ff-bf86-0cbe5a766eab-kube-api-access-rc6ng\") pod \"barbican-db-sync-dfw9r\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") " pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.818040 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6469ccffdf-mt66q"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.820240 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:08 crc kubenswrapper[4782]: W1124 12:13:08.820704 4782 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: secrets "ceilometer-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 24 12:13:08 crc kubenswrapper[4782]: E1124 12:13:08.820735 4782 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ceilometer-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 12:13:08 crc kubenswrapper[4782]: W1124 12:13:08.820771 4782 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: secrets "ceilometer-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 24 12:13:08 crc kubenswrapper[4782]: E1124 12:13:08.820783 4782 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ceilometer-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.846874 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tzz9s"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.849517 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-combined-ca-bundle\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.864164 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ppch\" (UniqueName: \"kubernetes.io/projected/14d52143-6c70-4d37-9829-c6ce79b2b8ee-kube-api-access-5ppch\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.864308 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-scripts\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.871560 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-xqhhv"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.891837 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d52143-6c70-4d37-9829-c6ce79b2b8ee-logs\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " 
pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.891934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-config-data\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.915674 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6469ccffdf-mt66q"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.926072 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.981932 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.983212 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-lqbgf"] Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.993725 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-config-data\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994075 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ppch\" (UniqueName: \"kubernetes.io/projected/14d52143-6c70-4d37-9829-c6ce79b2b8ee-kube-api-access-5ppch\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994097 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-scripts\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994141 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-horizon-secret-key\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994165 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994182 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d52143-6c70-4d37-9829-c6ce79b2b8ee-logs\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994202 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-config-data\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994228 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvft7\" (UniqueName: \"kubernetes.io/projected/33180ffd-5192-4cda-becb-cd323c7bd0ca-kube-api-access-tvft7\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994253 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q6v9\" (UniqueName: \"kubernetes.io/projected/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-kube-api-access-5q6v9\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-config-data\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994327 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-log-httpd\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994346 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-run-httpd\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994390 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-scripts\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-combined-ca-bundle\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994426 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-logs\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994442 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-scripts\") pod \"horizon-6469ccffdf-mt66q\" (UID: 
\"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994460 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.993969 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.994940 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dfw9r" Nov 24 12:13:08 crc kubenswrapper[4782]: I1124 12:13:08.996969 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d52143-6c70-4d37-9829-c6ce79b2b8ee-logs\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.001481 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-config-data\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.002624 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-scripts\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.015136 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-combined-ca-bundle\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.034078 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-lqbgf"] Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.064472 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.073553 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ppch\" (UniqueName: \"kubernetes.io/projected/14d52143-6c70-4d37-9829-c6ce79b2b8ee-kube-api-access-5ppch\") pod \"placement-db-sync-tzz9s\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.097576 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6fq\" (UniqueName: \"kubernetes.io/projected/57488019-5421-4f55-a15f-1012f7504ae7-kube-api-access-wp6fq\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.099075 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.099241 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-log-httpd\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.099367 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-run-httpd\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.100041 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-scripts\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.100295 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-logs\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.100573 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-scripts\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.101272 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.101366 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-config\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.101489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-config-data\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.099843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-log-httpd\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.099979 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-run-httpd\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.102216 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.102790 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.102902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-horizon-secret-key\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.103005 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.103092 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.103166 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-config-data\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " 
pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.103260 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvft7\" (UniqueName: \"kubernetes.io/projected/33180ffd-5192-4cda-becb-cd323c7bd0ca-kube-api-access-tvft7\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.104138 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q6v9\" (UniqueName: \"kubernetes.io/projected/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-kube-api-access-5q6v9\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.100703 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-logs\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.101233 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-scripts\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.105298 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.113475 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-config-data\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.117579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-horizon-secret-key\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.138485 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q6v9\" (UniqueName: \"kubernetes.io/projected/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-kube-api-access-5q6v9\") pod \"horizon-6469ccffdf-mt66q\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.140005 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tzz9s" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.175539 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvft7\" (UniqueName: \"kubernetes.io/projected/33180ffd-5192-4cda-becb-cd323c7bd0ca-kube-api-access-tvft7\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.236597 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.237693 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp6fq\" (UniqueName: \"kubernetes.io/projected/57488019-5421-4f55-a15f-1012f7504ae7-kube-api-access-wp6fq\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.237744 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.237802 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-config\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.237869 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.238004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.238064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.239634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.240260 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.240938 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.241791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.242354 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-config\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.287292 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp6fq\" (UniqueName: \"kubernetes.io/projected/57488019-5421-4f55-a15f-1012f7504ae7-kube-api-access-wp6fq\") pod \"dnsmasq-dns-fcfdd6f9f-lqbgf\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.345830 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vb9v4"] Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.374724 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-xqhhv"] Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.425158 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:09 crc kubenswrapper[4782]: W1124 12:13:09.457965 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod568c3257_2728_4979_9321_1520df80abb5.slice/crio-93ed566b42152b8d679a9b151f84f5e03b5b3742585da76c36f45ef00aa222a1 WatchSource:0}: Error finding container 93ed566b42152b8d679a9b151f84f5e03b5b3742585da76c36f45ef00aa222a1: Status 404 returned error can't find the container with id 93ed566b42152b8d679a9b151f84f5e03b5b3742585da76c36f45ef00aa222a1 Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.749238 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9v4" event={"ID":"0d24417a-519f-45a2-a4d1-733fcb35dafe","Type":"ContainerStarted","Data":"6dd98900e07f3302334199d6b0fc849758bfe0b9d54676c6fa6172d89235674f"} Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.750536 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" event={"ID":"568c3257-2728-4979-9321-1520df80abb5","Type":"ContainerStarted","Data":"93ed566b42152b8d679a9b151f84f5e03b5b3742585da76c36f45ef00aa222a1"} Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.797449 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84d67f97c7-khfdt"] Nov 24 12:13:09 crc kubenswrapper[4782]: W1124 12:13:09.848008 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod293902a5_6b27_494e_a53f_01e29ca865f3.slice/crio-d15bf1a3533816876d40b8084227e0c4d222962851dcffea420c3e805bdab030 WatchSource:0}: Error finding container d15bf1a3533816876d40b8084227e0c4d222962851dcffea420c3e805bdab030: Status 404 returned error can't find the container with id d15bf1a3533816876d40b8084227e0c4d222962851dcffea420c3e805bdab030 Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.862566 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.882012 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-scripts\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:09 crc kubenswrapper[4782]: I1124 12:13:09.984159 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pv8t7"] Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.011270 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dfw9r"] Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.031094 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.035524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-config-data\") pod \"ceilometer-0\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.044905 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"33180ffd-5192-4cda-becb-cd323c7bd0ca\") " pod="openstack/ceilometer-0" Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.052936 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-c4k2r"] Nov 24 12:13:10 crc kubenswrapper[4782]: W1124 12:13:10.063013 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42e30cc3_dd65_45af_82ed_40354098a697.slice/crio-8ef0f840a4b19ced7d2553893f3df0dfad3218498d889854ebf813031b3d47cd WatchSource:0}: Error finding container 8ef0f840a4b19ced7d2553893f3df0dfad3218498d889854ebf813031b3d47cd: Status 404 returned error can't find the container with id 8ef0f840a4b19ced7d2553893f3df0dfad3218498d889854ebf813031b3d47cd Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.108823 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.113949 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-lqbgf"] Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.160043 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tzz9s"] Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.189141 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6469ccffdf-mt66q"] Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.760982 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9v4" event={"ID":"0d24417a-519f-45a2-a4d1-733fcb35dafe","Type":"ContainerStarted","Data":"0dde0431b5b38235ea5f82b2cd0ee70ec5dcb36654805639b6c376c7f2a96508"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.768891 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pv8t7" event={"ID":"73188696-c109-46f8-985b-6f5e9ef5b787","Type":"ContainerStarted","Data":"c436e5c02c692a0a2ee2b161e62c1e194d14eaef1192bbf2e8e37391ea12f6a0"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.773488 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6469ccffdf-mt66q" event={"ID":"7f516dd5-3f25-4f23-9fdb-46a3b92eb753","Type":"ContainerStarted","Data":"29aab8ce9c4d4485855712411fc59f6c7a2c94338e9c9814bde17cdc7b542485"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.787162 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vb9v4" podStartSLOduration=3.78714467 podStartE2EDuration="3.78714467s" podCreationTimestamp="2025-11-24 12:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:13:10.780808956 +0000 UTC m=+1040.024642735" watchObservedRunningTime="2025-11-24 12:13:10.78714467 +0000 UTC m=+1040.030978439" Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.790666 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4k2r" event={"ID":"42e30cc3-dd65-45af-82ed-40354098a697","Type":"ContainerStarted","Data":"8edc41d602a47fd694df6ba55cd1e42b6459adf724c2909fc7f6452d04590b73"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.790706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4k2r" event={"ID":"42e30cc3-dd65-45af-82ed-40354098a697","Type":"ContainerStarted","Data":"8ef0f840a4b19ced7d2553893f3df0dfad3218498d889854ebf813031b3d47cd"} Nov 24 12:13:10 crc 
kubenswrapper[4782]: I1124 12:13:10.810018 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.829727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzz9s" event={"ID":"14d52143-6c70-4d37-9829-c6ce79b2b8ee","Type":"ContainerStarted","Data":"9ce0484a17b7b433231ecf3643238481cc9ee8cec3afc133f81dfa9f9e7a7fcb"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.830144 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-c4k2r" podStartSLOduration=2.830122912 podStartE2EDuration="2.830122912s" podCreationTimestamp="2025-11-24 12:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:13:10.819466409 +0000 UTC m=+1040.063300188" watchObservedRunningTime="2025-11-24 12:13:10.830122912 +0000 UTC m=+1040.073956671" Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.832652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84d67f97c7-khfdt" event={"ID":"293902a5-6b27-494e-a53f-01e29ca865f3","Type":"ContainerStarted","Data":"d15bf1a3533816876d40b8084227e0c4d222962851dcffea420c3e805bdab030"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.834052 4782 generic.go:334] "Generic (PLEG): container finished" podID="57488019-5421-4f55-a15f-1012f7504ae7" containerID="5f3363f9757fb3e434946c9971b19eb9baf7f8d6b0e35ef688584e7d3fe03696" exitCode=0 Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.834093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" event={"ID":"57488019-5421-4f55-a15f-1012f7504ae7","Type":"ContainerDied","Data":"5f3363f9757fb3e434946c9971b19eb9baf7f8d6b0e35ef688584e7d3fe03696"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.834113 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" event={"ID":"57488019-5421-4f55-a15f-1012f7504ae7","Type":"ContainerStarted","Data":"f36f292bd0eeea1f7cd5f9efec70130256050a09272ae979a88b2a6e810020eb"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.850584 4782 generic.go:334] "Generic (PLEG): container finished" podID="568c3257-2728-4979-9321-1520df80abb5" containerID="834b04d73cf94424554aae9690ff08a52e98de51527f5f36b3b681f171daeeee" exitCode=0 Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.850678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" event={"ID":"568c3257-2728-4979-9321-1520df80abb5","Type":"ContainerDied","Data":"834b04d73cf94424554aae9690ff08a52e98de51527f5f36b3b681f171daeeee"} Nov 24 12:13:10 crc kubenswrapper[4782]: I1124 12:13:10.864924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dfw9r" event={"ID":"4e814aae-c22b-41ff-bf86-0cbe5a766eab","Type":"ContainerStarted","Data":"0c1ac0edae95bc17d63a5c774e059278018804c9f0d1a749586d29177cf2062b"} Nov 24 12:13:10 crc kubenswrapper[4782]: W1124 12:13:10.874367 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33180ffd_5192_4cda_becb_cd323c7bd0ca.slice/crio-e8535721bf83e2f2be10d079d47ce03800cb45f03f46be7790ee07dcf8fe5eae WatchSource:0}: Error finding container e8535721bf83e2f2be10d079d47ce03800cb45f03f46be7790ee07dcf8fe5eae: Status 404 returned error can't find the container with id 
e8535721bf83e2f2be10d079d47ce03800cb45f03f46be7790ee07dcf8fe5eae Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.193888 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6469ccffdf-mt66q"] Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.250441 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64f4f7f487-9w4rs"] Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.269626 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.279110 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64f4f7f487-9w4rs"] Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.311024 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.418291 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ecd622f-0bee-4743-9e2a-c70445333aac-horizon-secret-key\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.418364 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-config-data\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.418869 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-scripts\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.418995 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lmgs\" (UniqueName: \"kubernetes.io/projected/6ecd622f-0bee-4743-9e2a-c70445333aac-kube-api-access-6lmgs\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.419078 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecd622f-0bee-4743-9e2a-c70445333aac-logs\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.522430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-config-data\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.522850 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-scripts\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " 
pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.522903 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lmgs\" (UniqueName: \"kubernetes.io/projected/6ecd622f-0bee-4743-9e2a-c70445333aac-kube-api-access-6lmgs\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.522952 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecd622f-0bee-4743-9e2a-c70445333aac-logs\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.523115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ecd622f-0bee-4743-9e2a-c70445333aac-horizon-secret-key\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.536118 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-scripts\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.536190 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecd622f-0bee-4743-9e2a-c70445333aac-logs\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.538807 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ecd622f-0bee-4743-9e2a-c70445333aac-horizon-secret-key\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.540320 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-config-data\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.565512 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lmgs\" (UniqueName: \"kubernetes.io/projected/6ecd622f-0bee-4743-9e2a-c70445333aac-kube-api-access-6lmgs\") pod \"horizon-64f4f7f487-9w4rs\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.592649 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.727134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5tjl\" (UniqueName: \"kubernetes.io/projected/568c3257-2728-4979-9321-1520df80abb5-kube-api-access-l5tjl\") pod \"568c3257-2728-4979-9321-1520df80abb5\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.727243 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-svc\") pod \"568c3257-2728-4979-9321-1520df80abb5\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.727318 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-sb\") pod \"568c3257-2728-4979-9321-1520df80abb5\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.727393 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-config\") pod \"568c3257-2728-4979-9321-1520df80abb5\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.727445 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-nb\") pod \"568c3257-2728-4979-9321-1520df80abb5\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.727465 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-swift-storage-0\") pod \"568c3257-2728-4979-9321-1520df80abb5\" (UID: \"568c3257-2728-4979-9321-1520df80abb5\") " Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.740605 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568c3257-2728-4979-9321-1520df80abb5-kube-api-access-l5tjl" (OuterVolumeSpecName: "kube-api-access-l5tjl") pod "568c3257-2728-4979-9321-1520df80abb5" (UID: "568c3257-2728-4979-9321-1520df80abb5"). InnerVolumeSpecName "kube-api-access-l5tjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.761060 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "568c3257-2728-4979-9321-1520df80abb5" (UID: "568c3257-2728-4979-9321-1520df80abb5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.769876 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "568c3257-2728-4979-9321-1520df80abb5" (UID: "568c3257-2728-4979-9321-1520df80abb5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.771858 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-config" (OuterVolumeSpecName: "config") pod "568c3257-2728-4979-9321-1520df80abb5" (UID: "568c3257-2728-4979-9321-1520df80abb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.772937 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "568c3257-2728-4979-9321-1520df80abb5" (UID: "568c3257-2728-4979-9321-1520df80abb5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.777065 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "568c3257-2728-4979-9321-1520df80abb5" (UID: "568c3257-2728-4979-9321-1520df80abb5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.831168 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.831205 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.831215 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5tjl\" (UniqueName: \"kubernetes.io/projected/568c3257-2728-4979-9321-1520df80abb5-kube-api-access-l5tjl\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.831225 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.831239 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.831246 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568c3257-2728-4979-9321-1520df80abb5-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.843017 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.922722 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" event={"ID":"57488019-5421-4f55-a15f-1012f7504ae7","Type":"ContainerStarted","Data":"87948aefb64001a653f8aaa922236adbb65aae9f2b6f78c29163eaab77a9faaa"} Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.923008 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.927176 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33180ffd-5192-4cda-becb-cd323c7bd0ca","Type":"ContainerStarted","Data":"e8535721bf83e2f2be10d079d47ce03800cb45f03f46be7790ee07dcf8fe5eae"} Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.946768 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" podStartSLOduration=3.946753609 podStartE2EDuration="3.946753609s" podCreationTimestamp="2025-11-24 12:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:13:11.944111417 +0000 UTC m=+1041.187945186" watchObservedRunningTime="2025-11-24 12:13:11.946753609 +0000 UTC m=+1041.190587378" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.951754 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.951825 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-xqhhv" event={"ID":"568c3257-2728-4979-9321-1520df80abb5","Type":"ContainerDied","Data":"93ed566b42152b8d679a9b151f84f5e03b5b3742585da76c36f45ef00aa222a1"} Nov 24 12:13:11 crc kubenswrapper[4782]: I1124 12:13:11.951884 4782 scope.go:117] "RemoveContainer" containerID="834b04d73cf94424554aae9690ff08a52e98de51527f5f36b3b681f171daeeee" Nov 24 12:13:12 crc kubenswrapper[4782]: I1124 12:13:12.040700 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-xqhhv"] Nov 24 12:13:12 crc kubenswrapper[4782]: I1124 12:13:12.053767 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-xqhhv"] Nov 24 12:13:13 crc kubenswrapper[4782]: I1124 12:13:12.978585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nftg9" event={"ID":"cce98ec2-7dab-420c-8f56-e80c874419eb","Type":"ContainerStarted","Data":"0eb17af01c443abb5d317744ebbeeb45b2c7f9f955228287a39c10c5e37781d9"} Nov 24 12:13:13 crc kubenswrapper[4782]: I1124 12:13:12.996302 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nftg9" podStartSLOduration=4.07640758 podStartE2EDuration="43.996288662s" podCreationTimestamp="2025-11-24 12:12:29 +0000 UTC" firstStartedPulling="2025-11-24 12:12:31.186645641 +0000 UTC m=+1000.430479410" lastFinishedPulling="2025-11-24 12:13:11.106526723 +0000 UTC m=+1040.350360492" observedRunningTime="2025-11-24 12:13:12.994966135 +0000 UTC m=+1042.238799904" watchObservedRunningTime="2025-11-24 12:13:12.996288662 +0000 UTC m=+1042.240122441" Nov 24 12:13:13 crc kubenswrapper[4782]: I1124 12:13:13.503744 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568c3257-2728-4979-9321-1520df80abb5" 
path="/var/lib/kubelet/pods/568c3257-2728-4979-9321-1520df80abb5/volumes" Nov 24 12:13:13 crc kubenswrapper[4782]: I1124 12:13:13.567185 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64f4f7f487-9w4rs"] Nov 24 12:13:13 crc kubenswrapper[4782]: I1124 12:13:13.999140 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64f4f7f487-9w4rs" event={"ID":"6ecd622f-0bee-4743-9e2a-c70445333aac","Type":"ContainerStarted","Data":"9f51ed8eeec05b10c4d8d6c0a86c92fcc8e35560ff76ebd78c2f1300b1bfdd33"} Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.253034 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84d67f97c7-khfdt"] Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.310227 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8684f6cd6d-mwlp6"] Nov 24 12:13:17 crc kubenswrapper[4782]: E1124 12:13:17.314871 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568c3257-2728-4979-9321-1520df80abb5" containerName="init" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.315116 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="568c3257-2728-4979-9321-1520df80abb5" containerName="init" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.315478 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="568c3257-2728-4979-9321-1520df80abb5" containerName="init" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.316817 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.325731 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.348016 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8684f6cd6d-mwlp6"] Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.411460 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64f4f7f487-9w4rs"] Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.440981 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6574f9bb76-jkv6h"] Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.442291 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.469331 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-scripts\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.469437 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-config-data\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.469463 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cd757b-7259-4caf-b928-2dc936c99028-logs\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.469490 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-tls-certs\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.469524 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-secret-key\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.469567 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-combined-ca-bundle\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.469607 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn4sp\" (UniqueName: \"kubernetes.io/projected/b6cd757b-7259-4caf-b928-2dc936c99028-kube-api-access-bn4sp\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.472770 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6574f9bb76-jkv6h"] Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.570728 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmjlz\" (UniqueName: \"kubernetes.io/projected/41a8247d-b0d2-4a46-b108-bc260db36e11-kube-api-access-vmjlz\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.570868 4782 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-tls-certs\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.570923 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-secret-key\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.570957 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a8247d-b0d2-4a46-b108-bc260db36e11-config-data\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.570979 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-combined-ca-bundle\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571079 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn4sp\" (UniqueName: \"kubernetes.io/projected/b6cd757b-7259-4caf-b928-2dc936c99028-kube-api-access-bn4sp\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571117 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-horizon-tls-certs\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571156 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a8247d-b0d2-4a46-b108-bc260db36e11-logs\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571185 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-scripts\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571213 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41a8247d-b0d2-4a46-b108-bc260db36e11-scripts\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571274 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-horizon-secret-key\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571311 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-combined-ca-bundle\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-config-data\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571386 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cd757b-7259-4caf-b928-2dc936c99028-logs\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.571904 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cd757b-7259-4caf-b928-2dc936c99028-logs\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.575682 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-scripts\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.575990 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-config-data\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.594920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-tls-certs\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.594961 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-secret-key\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.597399 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-combined-ca-bundle\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " 
pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.606937 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn4sp\" (UniqueName: \"kubernetes.io/projected/b6cd757b-7259-4caf-b928-2dc936c99028-kube-api-access-bn4sp\") pod \"horizon-8684f6cd6d-mwlp6\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.659278 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.672458 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a8247d-b0d2-4a46-b108-bc260db36e11-logs\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.672517 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41a8247d-b0d2-4a46-b108-bc260db36e11-scripts\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.672563 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-horizon-secret-key\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.672591 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-combined-ca-bundle\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.672626 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmjlz\" (UniqueName: \"kubernetes.io/projected/41a8247d-b0d2-4a46-b108-bc260db36e11-kube-api-access-vmjlz\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.672665 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a8247d-b0d2-4a46-b108-bc260db36e11-config-data\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.672728 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-horizon-tls-certs\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.674284 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a8247d-b0d2-4a46-b108-bc260db36e11-logs\") pod \"horizon-6574f9bb76-jkv6h\" (UID: 
\"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.675011 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41a8247d-b0d2-4a46-b108-bc260db36e11-scripts\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.675429 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a8247d-b0d2-4a46-b108-bc260db36e11-config-data\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.678005 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-horizon-tls-certs\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.681002 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-combined-ca-bundle\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.681811 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41a8247d-b0d2-4a46-b108-bc260db36e11-horizon-secret-key\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.700556 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmjlz\" (UniqueName: \"kubernetes.io/projected/41a8247d-b0d2-4a46-b108-bc260db36e11-kube-api-access-vmjlz\") pod \"horizon-6574f9bb76-jkv6h\" (UID: \"41a8247d-b0d2-4a46-b108-bc260db36e11\") " pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:17 crc kubenswrapper[4782]: I1124 12:13:17.764659 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:18 crc kubenswrapper[4782]: I1124 12:13:18.185314 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8684f6cd6d-mwlp6"] Nov 24 12:13:18 crc kubenswrapper[4782]: I1124 12:13:18.480026 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6574f9bb76-jkv6h"] Nov 24 12:13:19 crc kubenswrapper[4782]: I1124 12:13:19.428553 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:13:19 crc kubenswrapper[4782]: I1124 12:13:19.536604 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-m5hq2"] Nov 24 12:13:19 crc kubenswrapper[4782]: I1124 12:13:19.536865 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" containerID="cri-o://bee0d71cc34b94370399c08e656319cdde6112b2d05914f70f4775b1047d5d9e" gracePeriod=10 Nov 24 12:13:21 crc kubenswrapper[4782]: I1124 12:13:21.086080 4782 generic.go:334] "Generic (PLEG): container finished" podID="68a5d934-59c7-4255-afde-22e3a83cb221" containerID="bee0d71cc34b94370399c08e656319cdde6112b2d05914f70f4775b1047d5d9e" exitCode=0 Nov 24 12:13:21 crc kubenswrapper[4782]: I1124 12:13:21.086163 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" event={"ID":"68a5d934-59c7-4255-afde-22e3a83cb221","Type":"ContainerDied","Data":"bee0d71cc34b94370399c08e656319cdde6112b2d05914f70f4775b1047d5d9e"} Nov 24 12:13:22 crc kubenswrapper[4782]: I1124 12:13:22.099111 4782 generic.go:334] "Generic (PLEG): container finished" podID="0d24417a-519f-45a2-a4d1-733fcb35dafe" containerID="0dde0431b5b38235ea5f82b2cd0ee70ec5dcb36654805639b6c376c7f2a96508" exitCode=0 Nov 24 12:13:22 crc kubenswrapper[4782]: I1124 12:13:22.099361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9v4" event={"ID":"0d24417a-519f-45a2-a4d1-733fcb35dafe","Type":"ContainerDied","Data":"0dde0431b5b38235ea5f82b2cd0ee70ec5dcb36654805639b6c376c7f2a96508"} Nov 24 12:13:22 crc kubenswrapper[4782]: I1124 12:13:22.972359 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Nov 24 12:13:24 crc kubenswrapper[4782]: E1124 12:13:24.229162 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 24 12:13:24 crc kubenswrapper[4782]: E1124 12:13:24.229924 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ppch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-tzz9s_openstack(14d52143-6c70-4d37-9829-c6ce79b2b8ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:13:24 crc kubenswrapper[4782]: E1124 12:13:24.231219 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tzz9s" podUID="14d52143-6c70-4d37-9829-c6ce79b2b8ee" Nov 24 12:13:25 crc kubenswrapper[4782]: E1124 12:13:25.129881 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tzz9s" podUID="14d52143-6c70-4d37-9829-c6ce79b2b8ee" Nov 24 12:13:25 crc kubenswrapper[4782]: I1124 12:13:25.562764 4782 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6bf08827-30cf-433c-a21d-bff880f4c8e5"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6bf08827-30cf-433c-a21d-bff880f4c8e5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6bf08827_30cf_433c_a21d_bff880f4c8e5.slice" Nov 24 12:13:25 crc kubenswrapper[4782]: E1124 12:13:25.562842 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort 
pod6bf08827-30cf-433c-a21d-bff880f4c8e5] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod6bf08827-30cf-433c-a21d-bff880f4c8e5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6bf08827_30cf_433c_a21d_bff880f4c8e5.slice" pod="openstack/ovn-controller-m6c9b-config-m9f4f" podUID="6bf08827-30cf-433c-a21d-bff880f4c8e5" Nov 24 12:13:26 crc kubenswrapper[4782]: I1124 12:13:26.132331 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m6c9b-config-m9f4f" Nov 24 12:13:27 crc kubenswrapper[4782]: I1124 12:13:27.972340 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Nov 24 12:13:32 crc kubenswrapper[4782]: I1124 12:13:32.972541 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Nov 24 12:13:32 crc kubenswrapper[4782]: I1124 12:13:32.973314 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:13:33 crc kubenswrapper[4782]: E1124 12:13:33.842771 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 12:13:33 crc kubenswrapper[4782]: E1124 12:13:33.843444 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n569h5fch58bh657hd7h649h5ffhdh7ch655h657hb4h69h77hf5hfh699h5dbh554h68ch575h7h557h5h65dh64chdfhb5h98h5b9h677h7bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8c8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-84d67f97c7-khfdt_openstack(293902a5-6b27-494e-a53f-01e29ca865f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:13:33 crc kubenswrapper[4782]: E1124 12:13:33.845567 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-84d67f97c7-khfdt" podUID="293902a5-6b27-494e-a53f-01e29ca865f3" Nov 24 12:13:33 crc kubenswrapper[4782]: E1124 12:13:33.925671 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 12:13:33 crc kubenswrapper[4782]: E1124 12:13:33.925833 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf9h585hd5h68bh67fh588h7hb6hfh59chcch578h598h7ch5c5h8fh9dh5cbh59ch556h655h5f7h5cdh5f4h66fh66fh5fh5f4h674h76h5dfh5fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5q6v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6469ccffdf-mt66q_openstack(7f516dd5-3f25-4f23-9fdb-46a3b92eb753): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:13:33 crc kubenswrapper[4782]: E1124 12:13:33.928174 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6469ccffdf-mt66q" podUID="7f516dd5-3f25-4f23-9fdb-46a3b92eb753" Nov 24 12:13:35 crc kubenswrapper[4782]: W1124 12:13:35.682533 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41a8247d_b0d2_4a46_b108_bc260db36e11.slice/crio-6cdd6db14f258a39fe11e39d22e82acd563a5a621c4c346ef77ef38ebd9d96b3 WatchSource:0}: Error finding container 6cdd6db14f258a39fe11e39d22e82acd563a5a621c4c346ef77ef38ebd9d96b3: Status 404 returned error can't find the container with id 6cdd6db14f258a39fe11e39d22e82acd563a5a621c4c346ef77ef38ebd9d96b3 Nov 24 12:13:35 crc kubenswrapper[4782]: E1124 12:13:35.700498 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 12:13:35 crc kubenswrapper[4782]: E1124 12:13:35.700674 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n547h678h8dh5cbh678h5cdhcfh677h7h5dh55dh66ch64ch5ddh655hcbh576hf7hd7h5cdh5bbh59fh66bhbdh585h684h5f5h56bh67dh58ch5f9h657q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-64f4f7f487-9w4rs_openstack(6ecd622f-0bee-4743-9e2a-c70445333aac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:13:35 crc kubenswrapper[4782]: E1124 12:13:35.703643 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-64f4f7f487-9w4rs" podUID="6ecd622f-0bee-4743-9e2a-c70445333aac" Nov 24 12:13:36 crc kubenswrapper[4782]: I1124 12:13:36.226657 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6574f9bb76-jkv6h" event={"ID":"41a8247d-b0d2-4a46-b108-bc260db36e11","Type":"ContainerStarted","Data":"6cdd6db14f258a39fe11e39d22e82acd563a5a621c4c346ef77ef38ebd9d96b3"} Nov 24 12:13:42 crc kubenswrapper[4782]: I1124 12:13:42.972641 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Nov 24 12:13:47 crc kubenswrapper[4782]: I1124 12:13:47.973710 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Nov 24 12:13:49 crc kubenswrapper[4782]: W1124 12:13:49.967941 4782 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6cd757b_7259_4caf_b928_2dc936c99028.slice/crio-53a36cf5d726392feba1958cf8a4324339bc3dc7f18b4396aad3c4c8a393f9fe WatchSource:0}: Error finding container 53a36cf5d726392feba1958cf8a4324339bc3dc7f18b4396aad3c4c8a393f9fe: Status 404 returned error can't find the container with id 53a36cf5d726392feba1958cf8a4324339bc3dc7f18b4396aad3c4c8a393f9fe Nov 24 12:13:50 crc kubenswrapper[4782]: I1124 12:13:50.362074 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerStarted","Data":"53a36cf5d726392feba1958cf8a4324339bc3dc7f18b4396aad3c4c8a393f9fe"} Nov 24 12:13:51 crc kubenswrapper[4782]: E1124 12:13:51.175578 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 24 12:13:51 crc kubenswrapper[4782]: E1124 12:13:51.176027 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc6ng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-dfw9r_openstack(4e814aae-c22b-41ff-bf86-0cbe5a766eab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:13:51 crc kubenswrapper[4782]: E1124 12:13:51.177218 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-dfw9r" podUID="4e814aae-c22b-41ff-bf86-0cbe5a766eab" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.211538 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.218342 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.234939 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.235406 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.241788 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-scripts\") pod \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.241845 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-scripts\") pod \"293902a5-6b27-494e-a53f-01e29ca865f3\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.241873 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-config\") pod \"68a5d934-59c7-4255-afde-22e3a83cb221\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.241898 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-horizon-secret-key\") pod \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.241990 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q6v9\" (UniqueName: \"kubernetes.io/projected/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-kube-api-access-5q6v9\") pod \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242021 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-swift-storage-0\") pod \"68a5d934-59c7-4255-afde-22e3a83cb221\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242041 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-fernet-keys\") pod \"0d24417a-519f-45a2-a4d1-733fcb35dafe\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242069 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s48tj\" (UniqueName: \"kubernetes.io/projected/0d24417a-519f-45a2-a4d1-733fcb35dafe-kube-api-access-s48tj\") pod \"0d24417a-519f-45a2-a4d1-733fcb35dafe\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242094 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dljf7\" (UniqueName: \"kubernetes.io/projected/68a5d934-59c7-4255-afde-22e3a83cb221-kube-api-access-dljf7\") pod \"68a5d934-59c7-4255-afde-22e3a83cb221\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242117 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-sb\") pod \"68a5d934-59c7-4255-afde-22e3a83cb221\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242145 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-svc\") pod \"68a5d934-59c7-4255-afde-22e3a83cb221\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242176 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-config-data\") pod \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242198 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-config-data\") pod \"0d24417a-519f-45a2-a4d1-733fcb35dafe\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242217 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-config-data\") pod \"293902a5-6b27-494e-a53f-01e29ca865f3\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/293902a5-6b27-494e-a53f-01e29ca865f3-horizon-secret-key\") pod \"293902a5-6b27-494e-a53f-01e29ca865f3\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242272 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8c8g\" (UniqueName: \"kubernetes.io/projected/293902a5-6b27-494e-a53f-01e29ca865f3-kube-api-access-b8c8g\") pod \"293902a5-6b27-494e-a53f-01e29ca865f3\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242303 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-logs\") pod \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\" (UID: \"7f516dd5-3f25-4f23-9fdb-46a3b92eb753\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242326 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-credential-keys\") pod \"0d24417a-519f-45a2-a4d1-733fcb35dafe\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242354 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/293902a5-6b27-494e-a53f-01e29ca865f3-logs\") pod \"293902a5-6b27-494e-a53f-01e29ca865f3\" (UID: \"293902a5-6b27-494e-a53f-01e29ca865f3\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-combined-ca-bundle\") pod \"0d24417a-519f-45a2-a4d1-733fcb35dafe\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242467 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-nb\") pod \"68a5d934-59c7-4255-afde-22e3a83cb221\" (UID: \"68a5d934-59c7-4255-afde-22e3a83cb221\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.242510 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-scripts\") pod \"0d24417a-519f-45a2-a4d1-733fcb35dafe\" (UID: \"0d24417a-519f-45a2-a4d1-733fcb35dafe\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.257491 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/293902a5-6b27-494e-a53f-01e29ca865f3-logs" (OuterVolumeSpecName: "logs") pod "293902a5-6b27-494e-a53f-01e29ca865f3" (UID: "293902a5-6b27-494e-a53f-01e29ca865f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.257660 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-scripts" (OuterVolumeSpecName: "scripts") pod "293902a5-6b27-494e-a53f-01e29ca865f3" (UID: "293902a5-6b27-494e-a53f-01e29ca865f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.259281 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/293902a5-6b27-494e-a53f-01e29ca865f3-kube-api-access-b8c8g" (OuterVolumeSpecName: "kube-api-access-b8c8g") pod "293902a5-6b27-494e-a53f-01e29ca865f3" (UID: "293902a5-6b27-494e-a53f-01e29ca865f3"). InnerVolumeSpecName "kube-api-access-b8c8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.259514 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-scripts" (OuterVolumeSpecName: "scripts") pod "7f516dd5-3f25-4f23-9fdb-46a3b92eb753" (UID: "7f516dd5-3f25-4f23-9fdb-46a3b92eb753"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.259547 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-kube-api-access-5q6v9" (OuterVolumeSpecName: "kube-api-access-5q6v9") pod "7f516dd5-3f25-4f23-9fdb-46a3b92eb753" (UID: "7f516dd5-3f25-4f23-9fdb-46a3b92eb753"). InnerVolumeSpecName "kube-api-access-5q6v9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.259673 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.262501 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-config-data" (OuterVolumeSpecName: "config-data") pod "293902a5-6b27-494e-a53f-01e29ca865f3" (UID: "293902a5-6b27-494e-a53f-01e29ca865f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.265931 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-config-data" (OuterVolumeSpecName: "config-data") pod "7f516dd5-3f25-4f23-9fdb-46a3b92eb753" (UID: "7f516dd5-3f25-4f23-9fdb-46a3b92eb753"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.270718 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-logs" (OuterVolumeSpecName: "logs") pod "7f516dd5-3f25-4f23-9fdb-46a3b92eb753" (UID: "7f516dd5-3f25-4f23-9fdb-46a3b92eb753"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.276550 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/293902a5-6b27-494e-a53f-01e29ca865f3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "293902a5-6b27-494e-a53f-01e29ca865f3" (UID: "293902a5-6b27-494e-a53f-01e29ca865f3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.277327 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-scripts" (OuterVolumeSpecName: "scripts") pod "0d24417a-519f-45a2-a4d1-733fcb35dafe" (UID: "0d24417a-519f-45a2-a4d1-733fcb35dafe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.283584 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d24417a-519f-45a2-a4d1-733fcb35dafe-kube-api-access-s48tj" (OuterVolumeSpecName: "kube-api-access-s48tj") pod "0d24417a-519f-45a2-a4d1-733fcb35dafe" (UID: "0d24417a-519f-45a2-a4d1-733fcb35dafe"). InnerVolumeSpecName "kube-api-access-s48tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.283653 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a5d934-59c7-4255-afde-22e3a83cb221-kube-api-access-dljf7" (OuterVolumeSpecName: "kube-api-access-dljf7") pod "68a5d934-59c7-4255-afde-22e3a83cb221" (UID: "68a5d934-59c7-4255-afde-22e3a83cb221"). InnerVolumeSpecName "kube-api-access-dljf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.337596 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0d24417a-519f-45a2-a4d1-733fcb35dafe" (UID: "0d24417a-519f-45a2-a4d1-733fcb35dafe"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344208 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344245 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344260 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q6v9\" (UniqueName: \"kubernetes.io/projected/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-kube-api-access-5q6v9\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344274 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344285 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s48tj\" (UniqueName: \"kubernetes.io/projected/0d24417a-519f-45a2-a4d1-733fcb35dafe-kube-api-access-s48tj\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344299 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dljf7\" (UniqueName: \"kubernetes.io/projected/68a5d934-59c7-4255-afde-22e3a83cb221-kube-api-access-dljf7\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344312 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344345 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/293902a5-6b27-494e-a53f-01e29ca865f3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344357 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/293902a5-6b27-494e-a53f-01e29ca865f3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.344368 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8c8g\" (UniqueName: \"kubernetes.io/projected/293902a5-6b27-494e-a53f-01e29ca865f3-kube-api-access-b8c8g\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.353847 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.353891 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/293902a5-6b27-494e-a53f-01e29ca865f3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.353902 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.354616 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0d24417a-519f-45a2-a4d1-733fcb35dafe" (UID: "0d24417a-519f-45a2-a4d1-733fcb35dafe"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.355096 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7f516dd5-3f25-4f23-9fdb-46a3b92eb753" (UID: "7f516dd5-3f25-4f23-9fdb-46a3b92eb753"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.384740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68a5d934-59c7-4255-afde-22e3a83cb221" (UID: "68a5d934-59c7-4255-afde-22e3a83cb221"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.399925 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.400519 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" event={"ID":"68a5d934-59c7-4255-afde-22e3a83cb221","Type":"ContainerDied","Data":"13807f40940003ea9f95686263883279df862a44ec4626ad0aa09e80bdcaeab5"} Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.400559 4782 scope.go:117] "RemoveContainer" containerID="bee0d71cc34b94370399c08e656319cdde6112b2d05914f70f4775b1047d5d9e" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.407600 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84d67f97c7-khfdt" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.408590 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84d67f97c7-khfdt" event={"ID":"293902a5-6b27-494e-a53f-01e29ca865f3","Type":"ContainerDied","Data":"d15bf1a3533816876d40b8084227e0c4d222962851dcffea420c3e805bdab030"} Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.428448 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64f4f7f487-9w4rs" event={"ID":"6ecd622f-0bee-4743-9e2a-c70445333aac","Type":"ContainerDied","Data":"9f51ed8eeec05b10c4d8d6c0a86c92fcc8e35560ff76ebd78c2f1300b1bfdd33"} Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.428530 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64f4f7f487-9w4rs" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.433273 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68a5d934-59c7-4255-afde-22e3a83cb221" (UID: "68a5d934-59c7-4255-afde-22e3a83cb221"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.436017 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6469ccffdf-mt66q" event={"ID":"7f516dd5-3f25-4f23-9fdb-46a3b92eb753","Type":"ContainerDied","Data":"29aab8ce9c4d4485855712411fc59f6c7a2c94338e9c9814bde17cdc7b542485"} Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.436111 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6469ccffdf-mt66q" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.439922 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb9v4" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.440063 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9v4" event={"ID":"0d24417a-519f-45a2-a4d1-733fcb35dafe","Type":"ContainerDied","Data":"6dd98900e07f3302334199d6b0fc849758bfe0b9d54676c6fa6172d89235674f"} Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.440085 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd98900e07f3302334199d6b0fc849758bfe0b9d54676c6fa6172d89235674f" Nov 24 12:13:51 crc kubenswrapper[4782]: E1124 12:13:51.444017 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-dfw9r" podUID="4e814aae-c22b-41ff-bf86-0cbe5a766eab" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.455516 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-config-data" (OuterVolumeSpecName: "config-data") pod "0d24417a-519f-45a2-a4d1-733fcb35dafe" (UID: "0d24417a-519f-45a2-a4d1-733fcb35dafe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.456472 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-config-data\") pod \"6ecd622f-0bee-4743-9e2a-c70445333aac\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.456532 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-scripts\") pod \"6ecd622f-0bee-4743-9e2a-c70445333aac\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.456598 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lmgs\" (UniqueName: \"kubernetes.io/projected/6ecd622f-0bee-4743-9e2a-c70445333aac-kube-api-access-6lmgs\") pod \"6ecd622f-0bee-4743-9e2a-c70445333aac\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.456949 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecd622f-0bee-4743-9e2a-c70445333aac-logs\") pod \"6ecd622f-0bee-4743-9e2a-c70445333aac\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.457049 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ecd622f-0bee-4743-9e2a-c70445333aac-horizon-secret-key\") pod \"6ecd622f-0bee-4743-9e2a-c70445333aac\" (UID: \"6ecd622f-0bee-4743-9e2a-c70445333aac\") " Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.457488 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-config-data" (OuterVolumeSpecName: "config-data") pod "6ecd622f-0bee-4743-9e2a-c70445333aac" (UID: "6ecd622f-0bee-4743-9e2a-c70445333aac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.457924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-scripts" (OuterVolumeSpecName: "scripts") pod "6ecd622f-0bee-4743-9e2a-c70445333aac" (UID: "6ecd622f-0bee-4743-9e2a-c70445333aac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.462077 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ecd622f-0bee-4743-9e2a-c70445333aac-logs" (OuterVolumeSpecName: "logs") pod "6ecd622f-0bee-4743-9e2a-c70445333aac" (UID: "6ecd622f-0bee-4743-9e2a-c70445333aac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.469343 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68a5d934-59c7-4255-afde-22e3a83cb221" (UID: "68a5d934-59c7-4255-afde-22e3a83cb221"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.471461 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ecd622f-0bee-4743-9e2a-c70445333aac-kube-api-access-6lmgs" (OuterVolumeSpecName: "kube-api-access-6lmgs") pod "6ecd622f-0bee-4743-9e2a-c70445333aac" (UID: "6ecd622f-0bee-4743-9e2a-c70445333aac"). InnerVolumeSpecName "kube-api-access-6lmgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.477722 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d24417a-519f-45a2-a4d1-733fcb35dafe" (UID: "0d24417a-519f-45a2-a4d1-733fcb35dafe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.478064 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-config" (OuterVolumeSpecName: "config") pod "68a5d934-59c7-4255-afde-22e3a83cb221" (UID: "68a5d934-59c7-4255-afde-22e3a83cb221"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.487188 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "68a5d934-59c7-4255-afde-22e3a83cb221" (UID: "68a5d934-59c7-4255-afde-22e3a83cb221"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.491890 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ecd622f-0bee-4743-9e2a-c70445333aac-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6ecd622f-0bee-4743-9e2a-c70445333aac" (UID: "6ecd622f-0bee-4743-9e2a-c70445333aac"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.497462 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ecd622f-0bee-4743-9e2a-c70445333aac-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.498094 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.500536 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f516dd5-3f25-4f23-9fdb-46a3b92eb753-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505347 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505393 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505405 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505414 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ecd622f-0bee-4743-9e2a-c70445333aac-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505422 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505432 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505440 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lmgs\" (UniqueName: \"kubernetes.io/projected/6ecd622f-0bee-4743-9e2a-c70445333aac-kube-api-access-6lmgs\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505448 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecd622f-0bee-4743-9e2a-c70445333aac-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505462 4782 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.505471 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d24417a-519f-45a2-a4d1-733fcb35dafe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 
12:13:51.505479 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a5d934-59c7-4255-afde-22e3a83cb221-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.540583 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84d67f97c7-khfdt"] Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.560077 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84d67f97c7-khfdt"] Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.580251 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6469ccffdf-mt66q"] Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.588204 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6469ccffdf-mt66q"] Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.722121 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-m5hq2"] Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.728171 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-m5hq2"] Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.783058 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64f4f7f487-9w4rs"] Nov 24 12:13:51 crc kubenswrapper[4782]: I1124 12:13:51.790720 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64f4f7f487-9w4rs"] Nov 24 12:13:52 crc kubenswrapper[4782]: E1124 12:13:52.323347 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 12:13:52 crc kubenswrapper[4782]: E1124 12:13:52.323869 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9h647h554h5b9h5fbh9h58dh5b9h5c7h5d4h5fch95hd8h84h64fhbh544h5dbh567hb7h675h5d4h65dh556h9fh6fh558h576h644h578h665h586q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvft7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(33180ffd-5192-4cda-becb-cd323c7bd0ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.464183 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vb9v4"] Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.471558 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vb9v4"] Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.545735 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h9mzc"] Nov 24 12:13:52 crc kubenswrapper[4782]: E1124 12:13:52.546158 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="init" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.546181 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="init" Nov 24 12:13:52 crc kubenswrapper[4782]: E1124 12:13:52.546216 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.546226 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" Nov 24 12:13:52 crc kubenswrapper[4782]: E1124 12:13:52.546256 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d24417a-519f-45a2-a4d1-733fcb35dafe" containerName="keystone-bootstrap" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.546268 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d24417a-519f-45a2-a4d1-733fcb35dafe" containerName="keystone-bootstrap" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.546482 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d24417a-519f-45a2-a4d1-733fcb35dafe" containerName="keystone-bootstrap" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.546524 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.547233 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.550653 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.550701 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.550653 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bh78b" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.550898 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.551062 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.561040 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h9mzc"] Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.624729 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-combined-ca-bundle\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.624838 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-credential-keys\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.624872 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xnk\" (UniqueName: \"kubernetes.io/projected/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-kube-api-access-42xnk\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.624915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-scripts\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.625000 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-fernet-keys\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.625028 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-config-data\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.726721 4782 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-combined-ca-bundle\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.726820 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-credential-keys\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.726850 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42xnk\" (UniqueName: \"kubernetes.io/projected/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-kube-api-access-42xnk\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.726880 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-scripts\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.726955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-fernet-keys\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.726980 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-config-data\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.734094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-combined-ca-bundle\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.746514 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-fernet-keys\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.746950 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-credential-keys\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.747255 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-scripts\") pod \"keystone-bootstrap-h9mzc\" (UID: 
\"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.748481 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42xnk\" (UniqueName: \"kubernetes.io/projected/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-kube-api-access-42xnk\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.749658 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-config-data\") pod \"keystone-bootstrap-h9mzc\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.864043 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:13:52 crc kubenswrapper[4782]: I1124 12:13:52.974651 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-m5hq2" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Nov 24 12:13:53 crc kubenswrapper[4782]: I1124 12:13:53.500708 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d24417a-519f-45a2-a4d1-733fcb35dafe" path="/var/lib/kubelet/pods/0d24417a-519f-45a2-a4d1-733fcb35dafe/volumes" Nov 24 12:13:53 crc kubenswrapper[4782]: I1124 12:13:53.502244 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="293902a5-6b27-494e-a53f-01e29ca865f3" path="/var/lib/kubelet/pods/293902a5-6b27-494e-a53f-01e29ca865f3/volumes" Nov 24 12:13:53 crc kubenswrapper[4782]: I1124 12:13:53.502675 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68a5d934-59c7-4255-afde-22e3a83cb221" path="/var/lib/kubelet/pods/68a5d934-59c7-4255-afde-22e3a83cb221/volumes" Nov 24 12:13:53 crc kubenswrapper[4782]: I1124 12:13:53.504198 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ecd622f-0bee-4743-9e2a-c70445333aac" path="/var/lib/kubelet/pods/6ecd622f-0bee-4743-9e2a-c70445333aac/volumes" Nov 24 12:13:53 crc kubenswrapper[4782]: I1124 12:13:53.546285 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f516dd5-3f25-4f23-9fdb-46a3b92eb753" path="/var/lib/kubelet/pods/7f516dd5-3f25-4f23-9fdb-46a3b92eb753/volumes" Nov 24 12:13:54 crc kubenswrapper[4782]: E1124 12:13:54.050010 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 12:13:54 crc kubenswrapper[4782]: E1124 12:13:54.050468 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lmx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pv8t7_openstack(73188696-c109-46f8-985b-6f5e9ef5b787): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:13:54 crc kubenswrapper[4782]: E1124 12:13:54.051782 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pv8t7" podUID="73188696-c109-46f8-985b-6f5e9ef5b787" Nov 24 12:13:54 crc kubenswrapper[4782]: I1124 12:13:54.077576 4782 scope.go:117] "RemoveContainer" containerID="63114ac5fc34a1992d4892760e383da11c6c5d2719a5f13e1b0c950b9ddad18f" Nov 24 12:13:54 crc kubenswrapper[4782]: E1124 12:13:54.474629 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-pv8t7" podUID="73188696-c109-46f8-985b-6f5e9ef5b787" Nov 24 12:13:54 crc kubenswrapper[4782]: I1124 12:13:54.596238 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h9mzc"] Nov 24 12:13:54 crc kubenswrapper[4782]: W1124 12:13:54.606258 4782 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19dc64c0_2cc8_4721_bb12_8723e6e6c6dd.slice/crio-de4daff3c2d94451e6afe5ca1d3f643195e78500cbfbbda487287385edb17caa WatchSource:0}: Error finding container de4daff3c2d94451e6afe5ca1d3f643195e78500cbfbbda487287385edb17caa: Status 404 returned error can't find the container with id de4daff3c2d94451e6afe5ca1d3f643195e78500cbfbbda487287385edb17caa Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.486552 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6574f9bb76-jkv6h" event={"ID":"41a8247d-b0d2-4a46-b108-bc260db36e11","Type":"ContainerStarted","Data":"bdbf22143a2ba2489e7ebdd061e1838272cd083ee4699b46807abd58e4d09a6b"} Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.486622 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6574f9bb76-jkv6h" event={"ID":"41a8247d-b0d2-4a46-b108-bc260db36e11","Type":"ContainerStarted","Data":"a882983edbff0b88582f5b543adfc3b5f1a92090d9d3705f639c8751eda3543a"} Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.488076 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9mzc" event={"ID":"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd","Type":"ContainerStarted","Data":"0ea2bc84c459d47ee1feae9ea9546d4f173bb942ae5e3a16e21caf912b056c8f"} Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.488113 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9mzc" event={"ID":"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd","Type":"ContainerStarted","Data":"de4daff3c2d94451e6afe5ca1d3f643195e78500cbfbbda487287385edb17caa"} Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.489561 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzz9s" event={"ID":"14d52143-6c70-4d37-9829-c6ce79b2b8ee","Type":"ContainerStarted","Data":"6d51d8793fb378702d4a1b4e38bab1e7c00a0bcee1e88b8045d2a0a11797e248"} Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.499509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerStarted","Data":"4936e6759b1bb688284ae4a7f5c6a07a624b02b19d698563d135b73499c945c8"} Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.499557 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerStarted","Data":"64cbbdb567acf5e868c8f354beb70b099e03307cd06d11f8948a9b2d2ca6c089"} Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.508765 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6574f9bb76-jkv6h" podStartSLOduration=20.022945318 podStartE2EDuration="38.508747401s" podCreationTimestamp="2025-11-24 12:13:17 +0000 UTC" firstStartedPulling="2025-11-24 12:13:35.685874399 +0000 UTC m=+1064.929708168" lastFinishedPulling="2025-11-24 12:13:54.171676482 +0000 UTC m=+1083.415510251" observedRunningTime="2025-11-24 12:13:55.505516932 +0000 UTC m=+1084.749350711" watchObservedRunningTime="2025-11-24 12:13:55.508747401 +0000 UTC m=+1084.752581180" Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.524003 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h9mzc" podStartSLOduration=3.52397979 podStartE2EDuration="3.52397979s" podCreationTimestamp="2025-11-24 
12:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:13:55.522827648 +0000 UTC m=+1084.766661417" watchObservedRunningTime="2025-11-24 12:13:55.52397979 +0000 UTC m=+1084.767813559" Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.553302 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8684f6cd6d-mwlp6" podStartSLOduration=34.28138565 podStartE2EDuration="38.553284376s" podCreationTimestamp="2025-11-24 12:13:17 +0000 UTC" firstStartedPulling="2025-11-24 12:13:49.97267386 +0000 UTC m=+1079.216507629" lastFinishedPulling="2025-11-24 12:13:54.244572586 +0000 UTC m=+1083.488406355" observedRunningTime="2025-11-24 12:13:55.544006801 +0000 UTC m=+1084.787840590" watchObservedRunningTime="2025-11-24 12:13:55.553284376 +0000 UTC m=+1084.797118145" Nov 24 12:13:55 crc kubenswrapper[4782]: I1124 12:13:55.589832 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tzz9s" podStartSLOduration=3.597479104 podStartE2EDuration="47.58981578s" podCreationTimestamp="2025-11-24 12:13:08 +0000 UTC" firstStartedPulling="2025-11-24 12:13:10.261240938 +0000 UTC m=+1039.505074707" lastFinishedPulling="2025-11-24 12:13:54.253577614 +0000 UTC m=+1083.497411383" observedRunningTime="2025-11-24 12:13:55.584805123 +0000 UTC m=+1084.828638892" watchObservedRunningTime="2025-11-24 12:13:55.58981578 +0000 UTC m=+1084.833649549" Nov 24 12:13:56 crc kubenswrapper[4782]: I1124 12:13:56.509793 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33180ffd-5192-4cda-becb-cd323c7bd0ca","Type":"ContainerStarted","Data":"bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87"} Nov 24 12:13:57 crc kubenswrapper[4782]: I1124 12:13:57.659794 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:57 crc kubenswrapper[4782]: I1124 12:13:57.660081 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:13:57 crc kubenswrapper[4782]: I1124 12:13:57.765549 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:13:57 crc kubenswrapper[4782]: I1124 12:13:57.765890 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:14:00 crc kubenswrapper[4782]: I1124 12:14:00.410866 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:14:00 crc kubenswrapper[4782]: I1124 12:14:00.411253 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:14:07 crc kubenswrapper[4782]: I1124 12:14:07.662896 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 12:14:07 crc kubenswrapper[4782]: I1124 12:14:07.767779 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 24 12:14:08 crc kubenswrapper[4782]: I1124 12:14:08.611891 4782 generic.go:334] "Generic (PLEG): container finished" podID="cce98ec2-7dab-420c-8f56-e80c874419eb" containerID="0eb17af01c443abb5d317744ebbeeb45b2c7f9f955228287a39c10c5e37781d9" exitCode=0 Nov 24 12:14:08 crc kubenswrapper[4782]: I1124 12:14:08.611935 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nftg9" event={"ID":"cce98ec2-7dab-420c-8f56-e80c874419eb","Type":"ContainerDied","Data":"0eb17af01c443abb5d317744ebbeeb45b2c7f9f955228287a39c10c5e37781d9"} Nov 24 12:14:09 crc kubenswrapper[4782]: E1124 12:14:09.683857 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Nov 24 12:14:09 crc kubenswrapper[4782]: E1124 12:14:09.684286 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvft7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(33180ffd-5192-4cda-becb-cd323c7bd0ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.127590 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nftg9" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.261737 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-combined-ca-bundle\") pod \"cce98ec2-7dab-420c-8f56-e80c874419eb\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.261873 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-config-data\") pod \"cce98ec2-7dab-420c-8f56-e80c874419eb\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.261900 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvf48\" (UniqueName: \"kubernetes.io/projected/cce98ec2-7dab-420c-8f56-e80c874419eb-kube-api-access-gvf48\") pod \"cce98ec2-7dab-420c-8f56-e80c874419eb\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.262015 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-db-sync-config-data\") pod \"cce98ec2-7dab-420c-8f56-e80c874419eb\" (UID: \"cce98ec2-7dab-420c-8f56-e80c874419eb\") " Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.269993 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cce98ec2-7dab-420c-8f56-e80c874419eb" (UID: "cce98ec2-7dab-420c-8f56-e80c874419eb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.272695 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce98ec2-7dab-420c-8f56-e80c874419eb-kube-api-access-gvf48" (OuterVolumeSpecName: "kube-api-access-gvf48") pod "cce98ec2-7dab-420c-8f56-e80c874419eb" (UID: "cce98ec2-7dab-420c-8f56-e80c874419eb"). InnerVolumeSpecName "kube-api-access-gvf48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.288048 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cce98ec2-7dab-420c-8f56-e80c874419eb" (UID: "cce98ec2-7dab-420c-8f56-e80c874419eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.312860 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-config-data" (OuterVolumeSpecName: "config-data") pod "cce98ec2-7dab-420c-8f56-e80c874419eb" (UID: "cce98ec2-7dab-420c-8f56-e80c874419eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.363380 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.363419 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.363427 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvf48\" (UniqueName: \"kubernetes.io/projected/cce98ec2-7dab-420c-8f56-e80c874419eb-kube-api-access-gvf48\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.363438 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cce98ec2-7dab-420c-8f56-e80c874419eb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.633896 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nftg9" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.633927 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nftg9" event={"ID":"cce98ec2-7dab-420c-8f56-e80c874419eb","Type":"ContainerDied","Data":"e42b450259b87d713302a0007a7e594a3d002475f66ae1f9cebe7ee53fc5c7d7"} Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.634331 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e42b450259b87d713302a0007a7e594a3d002475f66ae1f9cebe7ee53fc5c7d7" Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.636253 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dfw9r" event={"ID":"4e814aae-c22b-41ff-bf86-0cbe5a766eab","Type":"ContainerStarted","Data":"5d29ae5715bd38adf7086de32933e95566eb28a5bc75e7fc2621d447f44a67d1"} Nov 24 12:14:10 crc kubenswrapper[4782]: I1124 12:14:10.679771 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dfw9r" podStartSLOduration=2.8809715799999998 podStartE2EDuration="1m2.679747609s" podCreationTimestamp="2025-11-24 12:13:08 +0000 UTC" firstStartedPulling="2025-11-24 12:13:10.09186174 +0000 UTC m=+1039.335695509" lastFinishedPulling="2025-11-24 12:14:09.890637779 +0000 UTC m=+1099.134471538" observedRunningTime="2025-11-24 12:14:10.66196616 +0000 UTC m=+1099.905799929" watchObservedRunningTime="2025-11-24 12:14:10.679747609 +0000 UTC m=+1099.923581378" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.195048 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vwmjc"] Nov 24 12:14:11 crc kubenswrapper[4782]: E1124 12:14:11.198207 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce98ec2-7dab-420c-8f56-e80c874419eb" containerName="glance-db-sync" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.198305 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce98ec2-7dab-420c-8f56-e80c874419eb" containerName="glance-db-sync" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.198657 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce98ec2-7dab-420c-8f56-e80c874419eb" containerName="glance-db-sync" Nov 24 
12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.199990 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.205488 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vwmjc"] Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.320223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv87p\" (UniqueName: \"kubernetes.io/projected/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-kube-api-access-gv87p\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.320476 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.320663 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-config\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.320770 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.320908 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.321026 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.422411 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-config\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.422470 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc 
kubenswrapper[4782]: I1124 12:14:11.422546 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.422586 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.422631 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv87p\" (UniqueName: \"kubernetes.io/projected/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-kube-api-access-gv87p\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.422670 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.423329 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.423668 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.423724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.423809 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.423916 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-config\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.446396 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gv87p\" (UniqueName: \"kubernetes.io/projected/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-kube-api-access-gv87p\") pod \"dnsmasq-dns-57c957c4ff-vwmjc\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") " pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.526806 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.659198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pv8t7" event={"ID":"73188696-c109-46f8-985b-6f5e9ef5b787","Type":"ContainerStarted","Data":"5021bb4a51cda786f28f1d047b9f835406e4184b33c5fecacfed73e03ce2c28b"} Nov 24 12:14:11 crc kubenswrapper[4782]: I1124 12:14:11.690653 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pv8t7" podStartSLOduration=3.716370422 podStartE2EDuration="1m3.690637148s" podCreationTimestamp="2025-11-24 12:13:08 +0000 UTC" firstStartedPulling="2025-11-24 12:13:10.051739646 +0000 UTC m=+1039.295573415" lastFinishedPulling="2025-11-24 12:14:10.026006372 +0000 UTC m=+1099.269840141" observedRunningTime="2025-11-24 12:14:11.686090213 +0000 UTC m=+1100.929923982" watchObservedRunningTime="2025-11-24 12:14:11.690637148 +0000 UTC m=+1100.934470917" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.043159 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.044768 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.050928 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.051387 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jhhvs" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.051705 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.058716 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.134149 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vwmjc"] Nov 24 12:14:12 crc kubenswrapper[4782]: W1124 12:14:12.138437 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c88e9bd_ca23_4af3_b79f_40ed8871dd16.slice/crio-1d2b87db504dbe08fbe70d21b30daa1d14cf94bc33e3c4529232d48749500f1c WatchSource:0}: Error finding container 1d2b87db504dbe08fbe70d21b30daa1d14cf94bc33e3c4529232d48749500f1c: Status 404 returned error can't find the container with id 1d2b87db504dbe08fbe70d21b30daa1d14cf94bc33e3c4529232d48749500f1c Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.140734 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc 
kubenswrapper[4782]: I1124 12:14:12.141016 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-scripts\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.141136 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-logs\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.141220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.141297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8snn\" (UniqueName: \"kubernetes.io/projected/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-kube-api-access-f8snn\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.141511 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.141624 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.242647 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.242966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243010 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 
12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243095 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-scripts\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243118 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-logs\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243146 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243168 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8snn\" (UniqueName: \"kubernetes.io/projected/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-kube-api-access-f8snn\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243471 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243490 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.243800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-logs\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.264937 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.266827 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-scripts\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.298074 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-f8snn\" (UniqueName: \"kubernetes.io/projected/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-kube-api-access-f8snn\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.313738 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.366909 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.448558 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.450326 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.453043 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.463968 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.558688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.558743 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.558825 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.558844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.558865 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.558886 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.558915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55t9\" (UniqueName: \"kubernetes.io/projected/0888170b-4324-481c-bb3e-8ec78c99d715-kube-api-access-r55t9\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.660239 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.660655 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.660682 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-logs\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.660702 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.660731 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55t9\" (UniqueName: \"kubernetes.io/projected/0888170b-4324-481c-bb3e-8ec78c99d715-kube-api-access-r55t9\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.660775 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.660805 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.661992 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-logs\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.662338 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.662519 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.668782 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.669267 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.686933 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.687739 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.689387 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55t9\" (UniqueName: \"kubernetes.io/projected/0888170b-4324-481c-bb3e-8ec78c99d715-kube-api-access-r55t9\") pod \"glance-default-internal-api-0\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.699517 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" event={"ID":"6c88e9bd-ca23-4af3-b79f-40ed8871dd16","Type":"ContainerStarted","Data":"1d2b87db504dbe08fbe70d21b30daa1d14cf94bc33e3c4529232d48749500f1c"} Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.717900 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"0888170b-4324-481c-bb3e-8ec78c99d715\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:12 crc kubenswrapper[4782]: I1124 12:14:12.779301 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:13 crc kubenswrapper[4782]: I1124 12:14:13.280093 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:13 crc kubenswrapper[4782]: W1124 12:14:13.291742 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a5cebd5_fd3d_4d9c_bc8e_0b59eff54285.slice/crio-4820deb0c15649c7c277fceca839a5426379f9f9183e410066f11311fe98cb50 WatchSource:0}: Error finding container 4820deb0c15649c7c277fceca839a5426379f9f9183e410066f11311fe98cb50: Status 404 returned error can't find the container with id 4820deb0c15649c7c277fceca839a5426379f9f9183e410066f11311fe98cb50 Nov 24 12:14:13 crc kubenswrapper[4782]: I1124 12:14:13.509507 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:13 crc kubenswrapper[4782]: W1124 12:14:13.512775 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0888170b_4324_481c_bb3e_8ec78c99d715.slice/crio-a29a6957b4fcf0071b89cb2f0cc3d3447bc5489eda82c33255e9c2bba0974d81 WatchSource:0}: Error finding container a29a6957b4fcf0071b89cb2f0cc3d3447bc5489eda82c33255e9c2bba0974d81: Status 404 returned error can't find the container with id a29a6957b4fcf0071b89cb2f0cc3d3447bc5489eda82c33255e9c2bba0974d81 Nov 24 12:14:13 crc kubenswrapper[4782]: I1124 12:14:13.713682 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0888170b-4324-481c-bb3e-8ec78c99d715","Type":"ContainerStarted","Data":"a29a6957b4fcf0071b89cb2f0cc3d3447bc5489eda82c33255e9c2bba0974d81"} Nov 24 12:14:13 crc kubenswrapper[4782]: I1124 12:14:13.715063 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285","Type":"ContainerStarted","Data":"4820deb0c15649c7c277fceca839a5426379f9f9183e410066f11311fe98cb50"} Nov 24 12:14:13 crc kubenswrapper[4782]: I1124 12:14:13.717908 4782 generic.go:334] "Generic (PLEG): container finished" podID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerID="b545d1381f61ad9dec6b976417a404d1a5c2c0be51f251e14d1b1ab36611888b" exitCode=0 Nov 24 12:14:13 crc kubenswrapper[4782]: I1124 12:14:13.717935 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" event={"ID":"6c88e9bd-ca23-4af3-b79f-40ed8871dd16","Type":"ContainerDied","Data":"b545d1381f61ad9dec6b976417a404d1a5c2c0be51f251e14d1b1ab36611888b"} Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.017819 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.110231 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.751781 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0888170b-4324-481c-bb3e-8ec78c99d715","Type":"ContainerStarted","Data":"a5217e69685314981823db80b381626debee08c0a07a82920f7e257875804461"} Nov 24 12:14:14 crc 
kubenswrapper[4782]: I1124 12:14:14.755884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285","Type":"ContainerStarted","Data":"db93bf93655b83f59400fd70bf6cc59eeb5c97853c909f1ba517396c4aac8bbc"} Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.761204 4782 generic.go:334] "Generic (PLEG): container finished" podID="19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" containerID="0ea2bc84c459d47ee1feae9ea9546d4f173bb942ae5e3a16e21caf912b056c8f" exitCode=0 Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.761286 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9mzc" event={"ID":"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd","Type":"ContainerDied","Data":"0ea2bc84c459d47ee1feae9ea9546d4f173bb942ae5e3a16e21caf912b056c8f"} Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.767705 4782 generic.go:334] "Generic (PLEG): container finished" podID="14d52143-6c70-4d37-9829-c6ce79b2b8ee" containerID="6d51d8793fb378702d4a1b4e38bab1e7c00a0bcee1e88b8045d2a0a11797e248" exitCode=0 Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.767808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzz9s" event={"ID":"14d52143-6c70-4d37-9829-c6ce79b2b8ee","Type":"ContainerDied","Data":"6d51d8793fb378702d4a1b4e38bab1e7c00a0bcee1e88b8045d2a0a11797e248"} Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.773285 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" event={"ID":"6c88e9bd-ca23-4af3-b79f-40ed8871dd16","Type":"ContainerStarted","Data":"51c4fcde52b0aec66588b5115a9d37ef1c91db7055616a8e59f417080e43ebb6"} Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.774199 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:14 crc kubenswrapper[4782]: I1124 12:14:14.811348 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" podStartSLOduration=3.811321226 podStartE2EDuration="3.811321226s" podCreationTimestamp="2025-11-24 12:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:14.803776959 +0000 UTC m=+1104.047610748" watchObservedRunningTime="2025-11-24 12:14:14.811321226 +0000 UTC m=+1104.055155015" Nov 24 12:14:15 crc kubenswrapper[4782]: I1124 12:14:15.790467 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0888170b-4324-481c-bb3e-8ec78c99d715","Type":"ContainerStarted","Data":"c691cf1a95b6dc23309dfad4a74a925dcca10b08c37d79361ebff0f7fa26cc09"} Nov 24 12:14:15 crc kubenswrapper[4782]: I1124 12:14:15.791163 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-log" containerID="cri-o://a5217e69685314981823db80b381626debee08c0a07a82920f7e257875804461" gracePeriod=30 Nov 24 12:14:15 crc kubenswrapper[4782]: I1124 12:14:15.791749 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-httpd" containerID="cri-o://c691cf1a95b6dc23309dfad4a74a925dcca10b08c37d79361ebff0f7fa26cc09" gracePeriod=30 Nov 24 12:14:15 crc 
kubenswrapper[4782]: I1124 12:14:15.810218 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285","Type":"ContainerStarted","Data":"5f10c964330af4aa62a06f286afc4b263b769f272c6d5f986e45e5b3411872a8"} Nov 24 12:14:15 crc kubenswrapper[4782]: I1124 12:14:15.811135 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-log" containerID="cri-o://db93bf93655b83f59400fd70bf6cc59eeb5c97853c909f1ba517396c4aac8bbc" gracePeriod=30 Nov 24 12:14:15 crc kubenswrapper[4782]: I1124 12:14:15.811172 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-httpd" containerID="cri-o://5f10c964330af4aa62a06f286afc4b263b769f272c6d5f986e45e5b3411872a8" gracePeriod=30 Nov 24 12:14:15 crc kubenswrapper[4782]: I1124 12:14:15.836338 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.836322113 podStartE2EDuration="4.836322113s" podCreationTimestamp="2025-11-24 12:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:15.830943415 +0000 UTC m=+1105.074777184" watchObservedRunningTime="2025-11-24 12:14:15.836322113 +0000 UTC m=+1105.080155872" Nov 24 12:14:15 crc kubenswrapper[4782]: I1124 12:14:15.875170 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.875150951 podStartE2EDuration="5.875150951s" podCreationTimestamp="2025-11-24 12:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:15.869179016 +0000 UTC m=+1105.113012785" watchObservedRunningTime="2025-11-24 12:14:15.875150951 +0000 UTC m=+1105.118984730" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.310785 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tzz9s" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.390501 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.458357 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ppch\" (UniqueName: \"kubernetes.io/projected/14d52143-6c70-4d37-9829-c6ce79b2b8ee-kube-api-access-5ppch\") pod \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.458601 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-combined-ca-bundle\") pod \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.458687 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-config-data\") pod \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.458814 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-scripts\") pod \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.458841 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d52143-6c70-4d37-9829-c6ce79b2b8ee-logs\") pod \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\" (UID: \"14d52143-6c70-4d37-9829-c6ce79b2b8ee\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.459081 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-fernet-keys\") pod \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.459394 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d52143-6c70-4d37-9829-c6ce79b2b8ee-logs" (OuterVolumeSpecName: "logs") pod "14d52143-6c70-4d37-9829-c6ce79b2b8ee" (UID: "14d52143-6c70-4d37-9829-c6ce79b2b8ee"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.462399 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-combined-ca-bundle\") pod \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.462524 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42xnk\" (UniqueName: \"kubernetes.io/projected/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-kube-api-access-42xnk\") pod \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.463071 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d52143-6c70-4d37-9829-c6ce79b2b8ee-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.465158 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-scripts" (OuterVolumeSpecName: "scripts") pod "14d52143-6c70-4d37-9829-c6ce79b2b8ee" (UID: "14d52143-6c70-4d37-9829-c6ce79b2b8ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.465608 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" (UID: "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.469637 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d52143-6c70-4d37-9829-c6ce79b2b8ee-kube-api-access-5ppch" (OuterVolumeSpecName: "kube-api-access-5ppch") pod "14d52143-6c70-4d37-9829-c6ce79b2b8ee" (UID: "14d52143-6c70-4d37-9829-c6ce79b2b8ee"). InnerVolumeSpecName "kube-api-access-5ppch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.473297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-kube-api-access-42xnk" (OuterVolumeSpecName: "kube-api-access-42xnk") pod "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" (UID: "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd"). InnerVolumeSpecName "kube-api-access-42xnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.489707 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14d52143-6c70-4d37-9829-c6ce79b2b8ee" (UID: "14d52143-6c70-4d37-9829-c6ce79b2b8ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.496500 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" (UID: "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.497412 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-config-data" (OuterVolumeSpecName: "config-data") pod "14d52143-6c70-4d37-9829-c6ce79b2b8ee" (UID: "14d52143-6c70-4d37-9829-c6ce79b2b8ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.563892 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-config-data\") pod \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.563945 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-scripts\") pod \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.563970 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-credential-keys\") pod \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\" (UID: \"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd\") " Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.564309 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.564328 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.564340 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d52143-6c70-4d37-9829-c6ce79b2b8ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.564349 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.564357 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.564366 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42xnk\" (UniqueName: \"kubernetes.io/projected/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-kube-api-access-42xnk\") on node \"crc\" DevicePath 
\"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.564402 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ppch\" (UniqueName: \"kubernetes.io/projected/14d52143-6c70-4d37-9829-c6ce79b2b8ee-kube-api-access-5ppch\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.567048 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" (UID: "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.569309 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-scripts" (OuterVolumeSpecName: "scripts") pod "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" (UID: "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.585738 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-config-data" (OuterVolumeSpecName: "config-data") pod "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" (UID: "19dc64c0-2cc8-4721-bb12-8723e6e6c6dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.665740 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.665774 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.665785 4782 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.839434 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzz9s" event={"ID":"14d52143-6c70-4d37-9829-c6ce79b2b8ee","Type":"ContainerDied","Data":"9ce0484a17b7b433231ecf3643238481cc9ee8cec3afc133f81dfa9f9e7a7fcb"} Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.839485 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ce0484a17b7b433231ecf3643238481cc9ee8cec3afc133f81dfa9f9e7a7fcb" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.839597 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tzz9s" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.851141 4782 generic.go:334] "Generic (PLEG): container finished" podID="0888170b-4324-481c-bb3e-8ec78c99d715" containerID="c691cf1a95b6dc23309dfad4a74a925dcca10b08c37d79361ebff0f7fa26cc09" exitCode=0 Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.851181 4782 generic.go:334] "Generic (PLEG): container finished" podID="0888170b-4324-481c-bb3e-8ec78c99d715" containerID="a5217e69685314981823db80b381626debee08c0a07a82920f7e257875804461" exitCode=143 Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.851235 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0888170b-4324-481c-bb3e-8ec78c99d715","Type":"ContainerDied","Data":"c691cf1a95b6dc23309dfad4a74a925dcca10b08c37d79361ebff0f7fa26cc09"} Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.851268 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0888170b-4324-481c-bb3e-8ec78c99d715","Type":"ContainerDied","Data":"a5217e69685314981823db80b381626debee08c0a07a82920f7e257875804461"} Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.882344 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54579c9c49-nkmgh"] Nov 24 12:14:16 crc kubenswrapper[4782]: E1124 12:14:16.882818 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" containerName="keystone-bootstrap" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.882842 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" containerName="keystone-bootstrap" Nov 24 12:14:16 crc kubenswrapper[4782]: E1124 12:14:16.882862 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d52143-6c70-4d37-9829-c6ce79b2b8ee" containerName="placement-db-sync" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.882870 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d52143-6c70-4d37-9829-c6ce79b2b8ee" containerName="placement-db-sync" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.883082 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" containerName="keystone-bootstrap" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.883142 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d52143-6c70-4d37-9829-c6ce79b2b8ee" containerName="placement-db-sync" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.883662 4782 generic.go:334] "Generic (PLEG): container finished" podID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerID="5f10c964330af4aa62a06f286afc4b263b769f272c6d5f986e45e5b3411872a8" exitCode=0 Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.883696 4782 generic.go:334] "Generic (PLEG): container finished" podID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerID="db93bf93655b83f59400fd70bf6cc59eeb5c97853c909f1ba517396c4aac8bbc" exitCode=143 Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.884178 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285","Type":"ContainerDied","Data":"5f10c964330af4aa62a06f286afc4b263b769f272c6d5f986e45e5b3411872a8"} Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.884227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285","Type":"ContainerDied","Data":"db93bf93655b83f59400fd70bf6cc59eeb5c97853c909f1ba517396c4aac8bbc"} Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.884317 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.888697 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.888914 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.894740 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h9mzc" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.894908 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9mzc" event={"ID":"19dc64c0-2cc8-4721-bb12-8723e6e6c6dd","Type":"ContainerDied","Data":"de4daff3c2d94451e6afe5ca1d3f643195e78500cbfbbda487287385edb17caa"} Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.894955 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de4daff3c2d94451e6afe5ca1d3f643195e78500cbfbbda487287385edb17caa" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.925221 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54579c9c49-nkmgh"] Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976268 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-scripts\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976347 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-internal-tls-certs\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976421 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-config-data\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976465 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-fernet-keys\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976526 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-public-tls-certs\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " 
pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976548 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-credential-keys\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976592 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mw5s\" (UniqueName: \"kubernetes.io/projected/4b6ef93c-ca86-4207-8cba-0cd8bc486889-kube-api-access-8mw5s\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.976636 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-combined-ca-bundle\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.992615 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-548457c99b-pdf6j"] Nov 24 12:14:16 crc kubenswrapper[4782]: I1124 12:14:16.994315 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.000137 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.000431 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dk5wn" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.000590 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.000797 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.001132 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.041647 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-548457c99b-pdf6j"] Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mw5s\" (UniqueName: \"kubernetes.io/projected/4b6ef93c-ca86-4207-8cba-0cd8bc486889-kube-api-access-8mw5s\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080317 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-combined-ca-bundle\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080362 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-scripts\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080416 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b571494b-eadd-44e4-b7cd-122dbbaddef5-logs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080449 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6t5d\" (UniqueName: \"kubernetes.io/projected/b571494b-eadd-44e4-b7cd-122dbbaddef5-kube-api-access-l6t5d\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080477 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-config-data\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-internal-tls-certs\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-config-data\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080584 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-public-tls-certs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080635 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-fernet-keys\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080684 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-internal-tls-certs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080714 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-scripts\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080758 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-public-tls-certs\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080783 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-credential-keys\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.080808 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-combined-ca-bundle\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.086129 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-scripts\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.088633 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-public-tls-certs\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.089287 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-combined-ca-bundle\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.091170 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-fernet-keys\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.092146 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-credential-keys\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.092229 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-config-data\") pod \"keystone-54579c9c49-nkmgh\" (UID: 
\"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.093018 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6ef93c-ca86-4207-8cba-0cd8bc486889-internal-tls-certs\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.104769 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mw5s\" (UniqueName: \"kubernetes.io/projected/4b6ef93c-ca86-4207-8cba-0cd8bc486889-kube-api-access-8mw5s\") pod \"keystone-54579c9c49-nkmgh\" (UID: \"4b6ef93c-ca86-4207-8cba-0cd8bc486889\") " pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.187198 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b571494b-eadd-44e4-b7cd-122dbbaddef5-logs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.189528 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b571494b-eadd-44e4-b7cd-122dbbaddef5-logs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.189625 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6t5d\" (UniqueName: \"kubernetes.io/projected/b571494b-eadd-44e4-b7cd-122dbbaddef5-kube-api-access-l6t5d\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.189665 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-config-data\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.190151 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-public-tls-certs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.190814 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-internal-tls-certs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.190873 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-scripts\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 
12:14:17.190960 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-combined-ca-bundle\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.194796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-config-data\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.195403 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-scripts\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.197598 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-combined-ca-bundle\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.209605 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-public-tls-certs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.228424 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.236264 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6t5d\" (UniqueName: \"kubernetes.io/projected/b571494b-eadd-44e4-b7cd-122dbbaddef5-kube-api-access-l6t5d\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.238099 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b571494b-eadd-44e4-b7cd-122dbbaddef5-internal-tls-certs\") pod \"placement-548457c99b-pdf6j\" (UID: \"b571494b-eadd-44e4-b7cd-122dbbaddef5\") " pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.321964 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.661061 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 12:14:17 crc kubenswrapper[4782]: I1124 12:14:17.765866 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.528562 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.580890 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-lqbgf"] Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.581687 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" podUID="57488019-5421-4f55-a15f-1012f7504ae7" containerName="dnsmasq-dns" containerID="cri-o://87948aefb64001a653f8aaa922236adbb65aae9f2b6f78c29163eaab77a9faaa" gracePeriod=10 Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.975064 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.975134 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.999036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0888170b-4324-481c-bb3e-8ec78c99d715","Type":"ContainerDied","Data":"a29a6957b4fcf0071b89cb2f0cc3d3447bc5489eda82c33255e9c2bba0974d81"} Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.999091 4782 scope.go:117] "RemoveContainer" containerID="c691cf1a95b6dc23309dfad4a74a925dcca10b08c37d79361ebff0f7fa26cc09" Nov 24 12:14:21 crc kubenswrapper[4782]: I1124 12:14:21.999225 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.017039 4782 generic.go:334] "Generic (PLEG): container finished" podID="57488019-5421-4f55-a15f-1012f7504ae7" containerID="87948aefb64001a653f8aaa922236adbb65aae9f2b6f78c29163eaab77a9faaa" exitCode=0 Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.017113 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" event={"ID":"57488019-5421-4f55-a15f-1012f7504ae7","Type":"ContainerDied","Data":"87948aefb64001a653f8aaa922236adbb65aae9f2b6f78c29163eaab77a9faaa"} Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.035744 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285","Type":"ContainerDied","Data":"4820deb0c15649c7c277fceca839a5426379f9f9183e410066f11311fe98cb50"} Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.035839 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090265 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r55t9\" (UniqueName: \"kubernetes.io/projected/0888170b-4324-481c-bb3e-8ec78c99d715-kube-api-access-r55t9\") pod \"0888170b-4324-481c-bb3e-8ec78c99d715\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090316 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-config-data\") pod \"0888170b-4324-481c-bb3e-8ec78c99d715\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090364 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-config-data\") pod \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090410 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-scripts\") pod \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-httpd-run\") pod \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090531 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-scripts\") pod \"0888170b-4324-481c-bb3e-8ec78c99d715\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8snn\" (UniqueName: \"kubernetes.io/projected/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-kube-api-access-f8snn\") pod \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\" (UID: 
\"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090593 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-httpd-run\") pod \"0888170b-4324-481c-bb3e-8ec78c99d715\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090664 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-combined-ca-bundle\") pod \"0888170b-4324-481c-bb3e-8ec78c99d715\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090685 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0888170b-4324-481c-bb3e-8ec78c99d715\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.090712 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-logs\") pod \"0888170b-4324-481c-bb3e-8ec78c99d715\" (UID: \"0888170b-4324-481c-bb3e-8ec78c99d715\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.092614 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-logs\") pod \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.092645 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-combined-ca-bundle\") pod \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\" (UID: \"9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285\") " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.093950 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-logs" (OuterVolumeSpecName: "logs") pod "0888170b-4324-481c-bb3e-8ec78c99d715" (UID: "0888170b-4324-481c-bb3e-8ec78c99d715"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.098676 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-logs" (OuterVolumeSpecName: "logs") pod "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" (UID: "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.099161 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "0888170b-4324-481c-bb3e-8ec78c99d715" (UID: "0888170b-4324-481c-bb3e-8ec78c99d715"). 
InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.099335 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-scripts" (OuterVolumeSpecName: "scripts") pod "0888170b-4324-481c-bb3e-8ec78c99d715" (UID: "0888170b-4324-481c-bb3e-8ec78c99d715"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.099663 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" (UID: "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.099892 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0888170b-4324-481c-bb3e-8ec78c99d715" (UID: "0888170b-4324-481c-bb3e-8ec78c99d715"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.103397 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0888170b-4324-481c-bb3e-8ec78c99d715-kube-api-access-r55t9" (OuterVolumeSpecName: "kube-api-access-r55t9") pod "0888170b-4324-481c-bb3e-8ec78c99d715" (UID: "0888170b-4324-481c-bb3e-8ec78c99d715"). InnerVolumeSpecName "kube-api-access-r55t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.123729 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" (UID: "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.168201 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-scripts" (OuterVolumeSpecName: "scripts") pod "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" (UID: "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.172764 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-kube-api-access-f8snn" (OuterVolumeSpecName: "kube-api-access-f8snn") pod "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" (UID: "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285"). InnerVolumeSpecName "kube-api-access-f8snn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199250 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199615 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199702 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0888170b-4324-481c-bb3e-8ec78c99d715-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199760 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199812 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r55t9\" (UniqueName: \"kubernetes.io/projected/0888170b-4324-481c-bb3e-8ec78c99d715-kube-api-access-r55t9\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199880 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199937 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.199993 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.200049 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8snn\" (UniqueName: \"kubernetes.io/projected/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-kube-api-access-f8snn\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.200120 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.203680 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-config-data" (OuterVolumeSpecName: "config-data") pod "0888170b-4324-481c-bb3e-8ec78c99d715" (UID: "0888170b-4324-481c-bb3e-8ec78c99d715"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.205182 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0888170b-4324-481c-bb3e-8ec78c99d715" (UID: "0888170b-4324-481c-bb3e-8ec78c99d715"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.221191 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.225991 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" (UID: "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.248772 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-config-data" (OuterVolumeSpecName: "config-data") pod "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" (UID: "9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.260536 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.300754 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.300778 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.300788 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.300796 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0888170b-4324-481c-bb3e-8ec78c99d715-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.300804 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.300811 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.357801 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.374473 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385026 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: E1124 12:14:22.385424 4782 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-log" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385439 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-log" Nov 24 12:14:22 crc kubenswrapper[4782]: E1124 12:14:22.385454 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-httpd" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385461 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-httpd" Nov 24 12:14:22 crc kubenswrapper[4782]: E1124 12:14:22.385476 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-httpd" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385481 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-httpd" Nov 24 12:14:22 crc kubenswrapper[4782]: E1124 12:14:22.385496 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-log" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385503 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-log" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385687 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-log" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385705 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" containerName="glance-httpd" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385719 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-httpd" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.385730 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" containerName="glance-log" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.386659 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.392060 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.392290 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.392474 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jhhvs" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.392613 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.397581 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.425671 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.437789 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.452290 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.454651 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.460316 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.460403 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.461681 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.510794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58gc\" (UniqueName: \"kubernetes.io/projected/97916bf7-05b5-442a-b908-3f0e20f4badb-kube-api-access-v58gc\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.510887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.510919 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.510973 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-logs\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.511014 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.511079 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.511102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.511125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.612919 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.612983 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613109 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-logs\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613166 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613232 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58gc\" (UniqueName: \"kubernetes.io/projected/97916bf7-05b5-442a-b908-3f0e20f4badb-kube-api-access-v58gc\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613261 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-scripts\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613297 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613327 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613389 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-logs\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613413 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnt8\" (UniqueName: \"kubernetes.io/projected/51c9b08b-6a0a-45a6-904c-9964952a7b23-kube-api-access-fhnt8\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613441 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613490 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613509 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-config-data\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.613535 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.615196 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.615455 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.615622 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-logs\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.631679 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.632634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.633487 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58gc\" (UniqueName: \"kubernetes.io/projected/97916bf7-05b5-442a-b908-3f0e20f4badb-kube-api-access-v58gc\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.641339 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.643075 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.662121 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715076 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715132 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-logs\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715168 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715202 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-scripts\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhnt8\" (UniqueName: \"kubernetes.io/projected/51c9b08b-6a0a-45a6-904c-9964952a7b23-kube-api-access-fhnt8\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " 
pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.715309 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-config-data\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.716186 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-logs\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.716493 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.716934 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.719323 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-config-data\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.719673 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.719735 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-scripts\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.725819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.729893 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.738499 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhnt8\" (UniqueName: \"kubernetes.io/projected/51c9b08b-6a0a-45a6-904c-9964952a7b23-kube-api-access-fhnt8\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.761583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " pod="openstack/glance-default-external-api-0" Nov 24 12:14:22 crc kubenswrapper[4782]: I1124 12:14:22.791907 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.502064 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0888170b-4324-481c-bb3e-8ec78c99d715" path="/var/lib/kubelet/pods/0888170b-4324-481c-bb3e-8ec78c99d715/volumes" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.503640 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285" path="/var/lib/kubelet/pods/9a5cebd5-fd3d-4d9c-bc8e-0b59eff54285/volumes" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.568019 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.739017 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-config\") pod \"57488019-5421-4f55-a15f-1012f7504ae7\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.739108 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-swift-storage-0\") pod \"57488019-5421-4f55-a15f-1012f7504ae7\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.739173 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-sb\") pod \"57488019-5421-4f55-a15f-1012f7504ae7\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.739243 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp6fq\" (UniqueName: \"kubernetes.io/projected/57488019-5421-4f55-a15f-1012f7504ae7-kube-api-access-wp6fq\") pod \"57488019-5421-4f55-a15f-1012f7504ae7\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.739320 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-svc\") pod \"57488019-5421-4f55-a15f-1012f7504ae7\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.739337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-nb\") pod \"57488019-5421-4f55-a15f-1012f7504ae7\" (UID: \"57488019-5421-4f55-a15f-1012f7504ae7\") " Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.762283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57488019-5421-4f55-a15f-1012f7504ae7-kube-api-access-wp6fq" (OuterVolumeSpecName: "kube-api-access-wp6fq") pod "57488019-5421-4f55-a15f-1012f7504ae7" (UID: "57488019-5421-4f55-a15f-1012f7504ae7"). InnerVolumeSpecName "kube-api-access-wp6fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.789457 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-config" (OuterVolumeSpecName: "config") pod "57488019-5421-4f55-a15f-1012f7504ae7" (UID: "57488019-5421-4f55-a15f-1012f7504ae7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.820715 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57488019-5421-4f55-a15f-1012f7504ae7" (UID: "57488019-5421-4f55-a15f-1012f7504ae7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.833654 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57488019-5421-4f55-a15f-1012f7504ae7" (UID: "57488019-5421-4f55-a15f-1012f7504ae7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.833681 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57488019-5421-4f55-a15f-1012f7504ae7" (UID: "57488019-5421-4f55-a15f-1012f7504ae7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.836636 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57488019-5421-4f55-a15f-1012f7504ae7" (UID: "57488019-5421-4f55-a15f-1012f7504ae7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.843476 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp6fq\" (UniqueName: \"kubernetes.io/projected/57488019-5421-4f55-a15f-1012f7504ae7-kube-api-access-wp6fq\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.843510 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.843520 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.843530 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.843538 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:23 crc kubenswrapper[4782]: I1124 12:14:23.843546 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57488019-5421-4f55-a15f-1012f7504ae7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.055782 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" event={"ID":"57488019-5421-4f55-a15f-1012f7504ae7","Type":"ContainerDied","Data":"f36f292bd0eeea1f7cd5f9efec70130256050a09272ae979a88b2a6e810020eb"} Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.055874 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-lqbgf" Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.087438 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-lqbgf"] Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.097241 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-lqbgf"] Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.327321 4782 scope.go:117] "RemoveContainer" containerID="a5217e69685314981823db80b381626debee08c0a07a82920f7e257875804461" Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.433024 4782 scope.go:117] "RemoveContainer" containerID="5f10c964330af4aa62a06f286afc4b263b769f272c6d5f986e45e5b3411872a8" Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.518964 4782 scope.go:117] "RemoveContainer" containerID="db93bf93655b83f59400fd70bf6cc59eeb5c97853c909f1ba517396c4aac8bbc" Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.551692 4782 scope.go:117] "RemoveContainer" containerID="87948aefb64001a653f8aaa922236adbb65aae9f2b6f78c29163eaab77a9faaa" Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.592701 4782 scope.go:117] "RemoveContainer" containerID="5f3363f9757fb3e434946c9971b19eb9baf7f8d6b0e35ef688584e7d3fe03696" Nov 24 12:14:24 crc kubenswrapper[4782]: E1124 12:14:24.622076 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.647823 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54579c9c49-nkmgh"] Nov 24 12:14:24 crc kubenswrapper[4782]: W1124 12:14:24.662083 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b6ef93c_ca86_4207_8cba_0cd8bc486889.slice/crio-1645ab349628e3670ea86991b0ae45b1e70e31fab390e44fee020bb7b1e16ac9 WatchSource:0}: Error finding container 1645ab349628e3670ea86991b0ae45b1e70e31fab390e44fee020bb7b1e16ac9: Status 404 returned error can't find the container with id 1645ab349628e3670ea86991b0ae45b1e70e31fab390e44fee020bb7b1e16ac9 Nov 24 12:14:24 crc kubenswrapper[4782]: W1124 12:14:24.983495 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97916bf7_05b5_442a_b908_3f0e20f4badb.slice/crio-6904483e761cbf69b277f2c03d084ab998c35a5a6b9b05c3a50715628b2bd298 WatchSource:0}: Error finding container 6904483e761cbf69b277f2c03d084ab998c35a5a6b9b05c3a50715628b2bd298: Status 404 returned error can't find the container with id 6904483e761cbf69b277f2c03d084ab998c35a5a6b9b05c3a50715628b2bd298 Nov 24 12:14:24 crc kubenswrapper[4782]: I1124 12:14:24.994286 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.062805 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-548457c99b-pdf6j"] Nov 24 12:14:25 crc kubenswrapper[4782]: W1124 12:14:25.069209 4782 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb571494b_eadd_44e4_b7cd_122dbbaddef5.slice/crio-64c0f6cdb12afd748eae73ec841edcc6ac2c2fd97e6e40c0577e221e2cfa0027 WatchSource:0}: Error finding container 64c0f6cdb12afd748eae73ec841edcc6ac2c2fd97e6e40c0577e221e2cfa0027: Status 404 returned error can't find the container with id 64c0f6cdb12afd748eae73ec841edcc6ac2c2fd97e6e40c0577e221e2cfa0027 Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.085577 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33180ffd-5192-4cda-becb-cd323c7bd0ca","Type":"ContainerStarted","Data":"4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79"} Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.085733 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="ceilometer-notification-agent" containerID="cri-o://bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87" gracePeriod=30 Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.085980 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.086328 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="proxy-httpd" containerID="cri-o://4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79" gracePeriod=30 Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.089550 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54579c9c49-nkmgh" event={"ID":"4b6ef93c-ca86-4207-8cba-0cd8bc486889","Type":"ContainerStarted","Data":"1645ab349628e3670ea86991b0ae45b1e70e31fab390e44fee020bb7b1e16ac9"} Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.101851 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97916bf7-05b5-442a-b908-3f0e20f4badb","Type":"ContainerStarted","Data":"6904483e761cbf69b277f2c03d084ab998c35a5a6b9b05c3a50715628b2bd298"} Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.190004 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:14:25 crc kubenswrapper[4782]: I1124 12:14:25.502972 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57488019-5421-4f55-a15f-1012f7504ae7" path="/var/lib/kubelet/pods/57488019-5421-4f55-a15f-1012f7504ae7/volumes" Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.127110 4782 generic.go:334] "Generic (PLEG): container finished" podID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerID="4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79" exitCode=0 Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.127477 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33180ffd-5192-4cda-becb-cd323c7bd0ca","Type":"ContainerDied","Data":"4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.132019 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54579c9c49-nkmgh" event={"ID":"4b6ef93c-ca86-4207-8cba-0cd8bc486889","Type":"ContainerStarted","Data":"97a2e940779fa85b1c4dfe225b0ec1f89ef55e218b296d2f341815fd0cc70b69"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.132626 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54579c9c49-nkmgh" Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.138095 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548457c99b-pdf6j" event={"ID":"b571494b-eadd-44e4-b7cd-122dbbaddef5","Type":"ContainerStarted","Data":"24483067b21c59b1d63b230e13d664b9bed1854029a4dd2b1fe3f48a1443fddd"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.138241 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548457c99b-pdf6j" event={"ID":"b571494b-eadd-44e4-b7cd-122dbbaddef5","Type":"ContainerStarted","Data":"02bedc15347bff933a98bbb68fc80f858e75b3ac08cb5a178d067b95223a3425"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.138301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548457c99b-pdf6j" event={"ID":"b571494b-eadd-44e4-b7cd-122dbbaddef5","Type":"ContainerStarted","Data":"64c0f6cdb12afd748eae73ec841edcc6ac2c2fd97e6e40c0577e221e2cfa0027"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.139073 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.139170 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.143859 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51c9b08b-6a0a-45a6-904c-9964952a7b23","Type":"ContainerStarted","Data":"39d9858c06caf43bf4b860c62d64ae98ad1b60241013e4b16e462968e64000dc"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.144009 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51c9b08b-6a0a-45a6-904c-9964952a7b23","Type":"ContainerStarted","Data":"63274c5677cd90ae657301c023a0f978a282c0e192bd3d260b0cca52a21de395"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.166651 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97916bf7-05b5-442a-b908-3f0e20f4badb","Type":"ContainerStarted","Data":"3d852f0ace2cb409980731475d75e86bd6e31fbde626b0acec9e1502d54ed6f4"} Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.176429 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54579c9c49-nkmgh" podStartSLOduration=10.176410251 podStartE2EDuration="10.176410251s" podCreationTimestamp="2025-11-24 12:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:26.168635187 +0000 UTC m=+1115.412468946" watchObservedRunningTime="2025-11-24 12:14:26.176410251 +0000 UTC m=+1115.420244020" Nov 24 12:14:26 crc kubenswrapper[4782]: I1124 12:14:26.193723 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-548457c99b-pdf6j" podStartSLOduration=10.193706107 podStartE2EDuration="10.193706107s" podCreationTimestamp="2025-11-24 12:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:26.191806884 +0000 UTC m=+1115.435640663" watchObservedRunningTime="2025-11-24 12:14:26.193706107 +0000 UTC m=+1115.437539876" Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.182453 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97916bf7-05b5-442a-b908-3f0e20f4badb","Type":"ContainerStarted","Data":"6951aeeec7edb151fecd0156a9a76701e6509e4d3fe354c7be7d68f9407eb02a"}
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.186807 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51c9b08b-6a0a-45a6-904c-9964952a7b23","Type":"ContainerStarted","Data":"5a16d7ae2626ec216e00a63076bd7ac1bd2e8be464407a67767d2413251d272b"}
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.210049 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.210028446 podStartE2EDuration="5.210028446s" podCreationTimestamp="2025-11-24 12:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:27.208228226 +0000 UTC m=+1116.452062005" watchObservedRunningTime="2025-11-24 12:14:27.210028446 +0000 UTC m=+1116.453862215"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.236046 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.23602192 podStartE2EDuration="5.23602192s" podCreationTimestamp="2025-11-24 12:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:27.232771361 +0000 UTC m=+1116.476605140" watchObservedRunningTime="2025-11-24 12:14:27.23602192 +0000 UTC m=+1116.479855689"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.660587 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.661176 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8684f6cd6d-mwlp6"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.662048 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"4936e6759b1bb688284ae4a7f5c6a07a624b02b19d698563d135b73499c945c8"} pod="openstack/horizon-8684f6cd6d-mwlp6" containerMessage="Container horizon failed startup probe, will be restarted"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.662159 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" containerID="cri-o://4936e6759b1bb688284ae4a7f5c6a07a624b02b19d698563d135b73499c945c8" gracePeriod=30
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.766281 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.766422 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6574f9bb76-jkv6h"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.767274 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"a882983edbff0b88582f5b543adfc3b5f1a92090d9d3705f639c8751eda3543a"} pod="openstack/horizon-6574f9bb76-jkv6h" containerMessage="Container horizon failed startup probe, will be restarted"
Nov 24 12:14:27 crc kubenswrapper[4782]: I1124 12:14:27.767328 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" containerID="cri-o://a882983edbff0b88582f5b543adfc3b5f1a92090d9d3705f639c8751eda3543a" gracePeriod=30
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.060463 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.187684 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvft7\" (UniqueName: \"kubernetes.io/projected/33180ffd-5192-4cda-becb-cd323c7bd0ca-kube-api-access-tvft7\") pod \"33180ffd-5192-4cda-becb-cd323c7bd0ca\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") "
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.187807 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-config-data\") pod \"33180ffd-5192-4cda-becb-cd323c7bd0ca\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") "
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.187883 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-combined-ca-bundle\") pod \"33180ffd-5192-4cda-becb-cd323c7bd0ca\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") "
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.187913 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-scripts\") pod \"33180ffd-5192-4cda-becb-cd323c7bd0ca\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") "
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.187958 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-sg-core-conf-yaml\") pod \"33180ffd-5192-4cda-becb-cd323c7bd0ca\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") "
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.188019 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-run-httpd\") pod \"33180ffd-5192-4cda-becb-cd323c7bd0ca\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") "
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.188100 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-log-httpd\") pod \"33180ffd-5192-4cda-becb-cd323c7bd0ca\" (UID: \"33180ffd-5192-4cda-becb-cd323c7bd0ca\") "
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.188395 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33180ffd-5192-4cda-becb-cd323c7bd0ca" (UID: "33180ffd-5192-4cda-becb-cd323c7bd0ca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.188544 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.188695 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33180ffd-5192-4cda-becb-cd323c7bd0ca" (UID: "33180ffd-5192-4cda-becb-cd323c7bd0ca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.192831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-scripts" (OuterVolumeSpecName: "scripts") pod "33180ffd-5192-4cda-becb-cd323c7bd0ca" (UID: "33180ffd-5192-4cda-becb-cd323c7bd0ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.193281 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33180ffd-5192-4cda-becb-cd323c7bd0ca" (UID: "33180ffd-5192-4cda-becb-cd323c7bd0ca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.204508 4782 generic.go:334] "Generic (PLEG): container finished" podID="4e814aae-c22b-41ff-bf86-0cbe5a766eab" containerID="5d29ae5715bd38adf7086de32933e95566eb28a5bc75e7fc2621d447f44a67d1" exitCode=0
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.204570 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dfw9r" event={"ID":"4e814aae-c22b-41ff-bf86-0cbe5a766eab","Type":"ContainerDied","Data":"5d29ae5715bd38adf7086de32933e95566eb28a5bc75e7fc2621d447f44a67d1"}
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.208826 4782 generic.go:334] "Generic (PLEG): container finished" podID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerID="bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87" exitCode=0
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.208872 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33180ffd-5192-4cda-becb-cd323c7bd0ca","Type":"ContainerDied","Data":"bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87"}
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.208902 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33180ffd-5192-4cda-becb-cd323c7bd0ca","Type":"ContainerDied","Data":"e8535721bf83e2f2be10d079d47ce03800cb45f03f46be7790ee07dcf8fe5eae"}
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.208924 4782 scope.go:117] "RemoveContainer" containerID="4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.209085 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.214360 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33180ffd-5192-4cda-becb-cd323c7bd0ca-kube-api-access-tvft7" (OuterVolumeSpecName: "kube-api-access-tvft7") pod "33180ffd-5192-4cda-becb-cd323c7bd0ca" (UID: "33180ffd-5192-4cda-becb-cd323c7bd0ca"). InnerVolumeSpecName "kube-api-access-tvft7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.289988 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.290203 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.290266 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33180ffd-5192-4cda-becb-cd323c7bd0ca-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.290320 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvft7\" (UniqueName: \"kubernetes.io/projected/33180ffd-5192-4cda-becb-cd323c7bd0ca-kube-api-access-tvft7\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.313749 4782 scope.go:117] "RemoveContainer" containerID="bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.335460 4782 scope.go:117] "RemoveContainer" containerID="4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79"
Nov 24 12:14:29 crc kubenswrapper[4782]: E1124 12:14:29.335896 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79\": container with ID starting with 4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79 not found: ID does not exist" containerID="4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.336016 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79"} err="failed to get container status \"4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79\": rpc error: code = NotFound desc = could not find container \"4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79\": container with ID starting with 4126d75892e1262542978354ac89cb802793b6b80b3e063eea4034c5feb85d79 not found: ID does not exist"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.336094 4782 scope.go:117] "RemoveContainer" containerID="bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87"
Nov 24 12:14:29 crc kubenswrapper[4782]: E1124 12:14:29.336465 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87\": container with ID starting with bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87 not found: ID does not exist" containerID="bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.336548 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87"} err="failed to get container status \"bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87\": rpc error: code = NotFound desc = could not find container \"bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87\": container with ID starting with bf32c298ad55035cd48a8855ea1e756803e06117571c2852c8a6b954cc768f87 not found: ID does not exist"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.336488 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33180ffd-5192-4cda-becb-cd323c7bd0ca" (UID: "33180ffd-5192-4cda-becb-cd323c7bd0ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.351625 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-config-data" (OuterVolumeSpecName: "config-data") pod "33180ffd-5192-4cda-becb-cd323c7bd0ca" (UID: "33180ffd-5192-4cda-becb-cd323c7bd0ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.392438 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.392478 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33180ffd-5192-4cda-becb-cd323c7bd0ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.584314 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.593769 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617117 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:14:29 crc kubenswrapper[4782]: E1124 12:14:29.617528 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57488019-5421-4f55-a15f-1012f7504ae7" containerName="init"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617542 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="57488019-5421-4f55-a15f-1012f7504ae7" containerName="init"
Nov 24 12:14:29 crc kubenswrapper[4782]: E1124 12:14:29.617562 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="proxy-httpd"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617568 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="proxy-httpd"
Nov 24 12:14:29 crc kubenswrapper[4782]: E1124 12:14:29.617582 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="ceilometer-notification-agent"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617588 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="ceilometer-notification-agent"
Nov 24 12:14:29 crc kubenswrapper[4782]: E1124 12:14:29.617607 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57488019-5421-4f55-a15f-1012f7504ae7" containerName="dnsmasq-dns"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617613 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="57488019-5421-4f55-a15f-1012f7504ae7" containerName="dnsmasq-dns"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617760 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="proxy-httpd"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617786 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="57488019-5421-4f55-a15f-1012f7504ae7" containerName="dnsmasq-dns"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.617795 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" containerName="ceilometer-notification-agent"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.619270 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.621491 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.624083 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.624539 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.798010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-scripts\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.798071 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.798112 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8nd\" (UniqueName: \"kubernetes.io/projected/be96657f-b39b-4f41-8e3f-b364cf03d7d3-kube-api-access-nr8nd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.798135 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-log-httpd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.798165 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-run-httpd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.798190 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-config-data\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.798237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.899775 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-scripts\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.899826 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.899860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8nd\" (UniqueName: \"kubernetes.io/projected/be96657f-b39b-4f41-8e3f-b364cf03d7d3-kube-api-access-nr8nd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.899883 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-log-httpd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.899905 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-run-httpd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.899929 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-config-data\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.899973 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.902681 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-run-httpd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.902783 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-log-httpd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.905801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.905965 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-scripts\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.907047 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.918730 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-config-data\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.925501 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8nd\" (UniqueName: \"kubernetes.io/projected/be96657f-b39b-4f41-8e3f-b364cf03d7d3-kube-api-access-nr8nd\") pod \"ceilometer-0\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " pod="openstack/ceilometer-0"
Nov 24 12:14:29 crc kubenswrapper[4782]: I1124 12:14:29.950478 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.390469 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.417604 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.417674 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.561735 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dfw9r"
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.722729 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc6ng\" (UniqueName: \"kubernetes.io/projected/4e814aae-c22b-41ff-bf86-0cbe5a766eab-kube-api-access-rc6ng\") pod \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") "
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.722843 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-db-sync-config-data\") pod \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") "
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.722942 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-combined-ca-bundle\") pod \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\" (UID: \"4e814aae-c22b-41ff-bf86-0cbe5a766eab\") "
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.728509 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e814aae-c22b-41ff-bf86-0cbe5a766eab-kube-api-access-rc6ng" (OuterVolumeSpecName: "kube-api-access-rc6ng") pod "4e814aae-c22b-41ff-bf86-0cbe5a766eab" (UID: "4e814aae-c22b-41ff-bf86-0cbe5a766eab"). InnerVolumeSpecName "kube-api-access-rc6ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.731523 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4e814aae-c22b-41ff-bf86-0cbe5a766eab" (UID: "4e814aae-c22b-41ff-bf86-0cbe5a766eab"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.749963 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e814aae-c22b-41ff-bf86-0cbe5a766eab" (UID: "4e814aae-c22b-41ff-bf86-0cbe5a766eab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.824895 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.824929 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc6ng\" (UniqueName: \"kubernetes.io/projected/4e814aae-c22b-41ff-bf86-0cbe5a766eab-kube-api-access-rc6ng\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:30 crc kubenswrapper[4782]: I1124 12:14:30.824939 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e814aae-c22b-41ff-bf86-0cbe5a766eab-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.230517 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerStarted","Data":"8265cec06497a9db9812e78f634b8ed1257ef9ed4e28b2318e56dd87ac6ef87d"}
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.230557 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerStarted","Data":"77917b698f503312b555fa7b54fa3659682b66660aaf564144edf872240ce4aa"}
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.232065 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dfw9r" event={"ID":"4e814aae-c22b-41ff-bf86-0cbe5a766eab","Type":"ContainerDied","Data":"0c1ac0edae95bc17d63a5c774e059278018804c9f0d1a749586d29177cf2062b"}
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.232087 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c1ac0edae95bc17d63a5c774e059278018804c9f0d1a749586d29177cf2062b"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.232136 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dfw9r"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.520267 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33180ffd-5192-4cda-becb-cd323c7bd0ca" path="/var/lib/kubelet/pods/33180ffd-5192-4cda-becb-cd323c7bd0ca/volumes"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.521803 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8668478d95-lb5cp"]
Nov 24 12:14:31 crc kubenswrapper[4782]: E1124 12:14:31.524779 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e814aae-c22b-41ff-bf86-0cbe5a766eab" containerName="barbican-db-sync"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.524803 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e814aae-c22b-41ff-bf86-0cbe5a766eab" containerName="barbican-db-sync"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.525776 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e814aae-c22b-41ff-bf86-0cbe5a766eab" containerName="barbican-db-sync"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.566914 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.574934 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-567dd88794-rs7lm"]
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.578184 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.578355 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kgs76"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.579361 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.586771 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.588971 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.663062 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-config-data\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.663109 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b310f8bf-62fa-4955-984a-1df40c4e3a38-logs\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.663170 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnr7f\" (UniqueName: \"kubernetes.io/projected/b310f8bf-62fa-4955-984a-1df40c4e3a38-kube-api-access-vnr7f\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.665405 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-config-data-custom\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.665529 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-combined-ca-bundle\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.674474 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8668478d95-lb5cp"]
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.719744 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-567dd88794-rs7lm"]
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.719802 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-bp26f"]
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.721324 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.730403 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-bp26f"]
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767333 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767386 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-config-data\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767442 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbzx\" (UniqueName: \"kubernetes.io/projected/9f43951b-2d39-49a9-9b5d-023305c2e89a-kube-api-access-hqbzx\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767465 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-config-data\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767481 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b310f8bf-62fa-4955-984a-1df40c4e3a38-logs\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767500 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767521 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-config-data-custom\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767538 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767556 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnr7f\" (UniqueName: \"kubernetes.io/projected/b310f8bf-62fa-4955-984a-1df40c4e3a38-kube-api-access-vnr7f\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767582 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-config-data-custom\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767645 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-combined-ca-bundle\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmw4v\" (UniqueName: \"kubernetes.io/projected/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-kube-api-access-qmw4v\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767710 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-combined-ca-bundle\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767738 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-logs\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.767756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-config\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.768733 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b310f8bf-62fa-4955-984a-1df40c4e3a38-logs\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.773193 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-config-data-custom\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.774279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-combined-ca-bundle\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.776181 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b310f8bf-62fa-4955-984a-1df40c4e3a38-config-data\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.793587 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnr7f\" (UniqueName: \"kubernetes.io/projected/b310f8bf-62fa-4955-984a-1df40c4e3a38-kube-api-access-vnr7f\") pod \"barbican-worker-8668478d95-lb5cp\" (UID: \"b310f8bf-62fa-4955-984a-1df40c4e3a38\") " pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.842162 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-754ccd5b54-bt86q"]
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.843677 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.851473 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.852785 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-754ccd5b54-bt86q"]
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-config-data\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871442 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871467 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzh6\" (UniqueName: \"kubernetes.io/projected/94a02614-e0c0-4091-bf8e-5f660831e8cd-kube-api-access-xrzh6\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqbzx\" (UniqueName: \"kubernetes.io/projected/9f43951b-2d39-49a9-9b5d-023305c2e89a-kube-api-access-hqbzx\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871511 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-config-data-custom\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871554 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871592 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-combined-ca-bundle\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871609 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmw4v\" (UniqueName: \"kubernetes.io/projected/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-kube-api-access-qmw4v\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871628 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a02614-e0c0-4091-bf8e-5f660831e8cd-logs\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871673 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871696 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-logs\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871719 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data-custom\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871741 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-combined-ca-bundle\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.871765 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-config\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.872934 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-config\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.873516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.878478 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.879065 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.883035 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-config-data-custom\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.875366 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-logs\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.896977 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-config-data\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.905198 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.910545 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-combined-ca-bundle\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.916771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmw4v\" (UniqueName: \"kubernetes.io/projected/4f2c93b3-0f72-4e4e-bc85-c719e2e9954b-kube-api-access-qmw4v\") pod \"barbican-keystone-listener-567dd88794-rs7lm\" (UID: \"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b\") " pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.926226 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqbzx\" (UniqueName: \"kubernetes.io/projected/9f43951b-2d39-49a9-9b5d-023305c2e89a-kube-api-access-hqbzx\") pod \"dnsmasq-dns-6d66f584d7-bp26f\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.935816 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8668478d95-lb5cp"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.954391 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.973130 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.973322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data-custom\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.973420 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-combined-ca-bundle\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.973574 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrzh6\" (UniqueName: \"kubernetes.io/projected/94a02614-e0c0-4091-bf8e-5f660831e8cd-kube-api-access-xrzh6\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.973759 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a02614-e0c0-4091-bf8e-5f660831e8cd-logs\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.975144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a02614-e0c0-4091-bf8e-5f660831e8cd-logs\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.978957 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data-custom\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.980984 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-combined-ca-bundle\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:31 crc kubenswrapper[4782]: I1124 12:14:31.988348 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.006844 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-567dd88794-rs7lm"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.007386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrzh6\" (UniqueName: \"kubernetes.io/projected/94a02614-e0c0-4091-bf8e-5f660831e8cd-kube-api-access-xrzh6\") pod \"barbican-api-754ccd5b54-bt86q\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.322041 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.377297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerStarted","Data":"eac58ec35a684dc2a79e654a7ab4e803740c61b7e7006d7588160888952b0344"}
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.722609 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.722652 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.789602 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8668478d95-lb5cp"]
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.798634 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.798692 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 12:14:32 crc kubenswrapper[4782]: W1124 12:14:32.868622 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb310f8bf_62fa_4955_984a_1df40c4e3a38.slice/crio-ff49fe9d9120a60fe28ec8ac40d112c1e5a4c52b3afd0277e461141e948cdf52 WatchSource:0}: Error finding container ff49fe9d9120a60fe28ec8ac40d112c1e5a4c52b3afd0277e461141e948cdf52: Status 404 returned error can't find the container with id ff49fe9d9120a60fe28ec8ac40d112c1e5a4c52b3afd0277e461141e948cdf52
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.911259 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.960516 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-567dd88794-rs7lm"]
Nov 24 12:14:32 crc kubenswrapper[4782]: I1124 12:14:32.985182 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.004349 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-bp26f"]
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.012489 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.049328 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 12:14:33 crc kubenswrapper[4782]: W1124 12:14:33.066285 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f43951b_2d39_49a9_9b5d_023305c2e89a.slice/crio-a13a077d95a5fc3aa01ba86633644f181abfbb38a46d0b43fccdb6cc7d6ce924 WatchSource:0}: Error finding container a13a077d95a5fc3aa01ba86633644f181abfbb38a46d0b43fccdb6cc7d6ce924: Status 404 returned error can't find the container with id a13a077d95a5fc3aa01ba86633644f181abfbb38a46d0b43fccdb6cc7d6ce924
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.379472 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-754ccd5b54-bt86q"]
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.393990 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8668478d95-lb5cp" event={"ID":"b310f8bf-62fa-4955-984a-1df40c4e3a38","Type":"ContainerStarted","Data":"ff49fe9d9120a60fe28ec8ac40d112c1e5a4c52b3afd0277e461141e948cdf52"}
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.399111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-567dd88794-rs7lm" event={"ID":"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b","Type":"ContainerStarted","Data":"194e8c550eb20b74fb7723e3671568c7f471027fc34f5525e82ff3afde072025"}
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.441707 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" event={"ID":"9f43951b-2d39-49a9-9b5d-023305c2e89a","Type":"ContainerStarted","Data":"a13a077d95a5fc3aa01ba86633644f181abfbb38a46d0b43fccdb6cc7d6ce924"}
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.450791 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerStarted","Data":"059f9b11b21838144136d7d458556169521d3cbec5765b4f6cb35ea25b1b9dc1"}
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.450874 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.450894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.451141 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 12:14:33 crc kubenswrapper[4782]: I1124 12:14:33.451297 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 12:14:34 crc kubenswrapper[4782]: I1124 12:14:34.474547 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754ccd5b54-bt86q" event={"ID":"94a02614-e0c0-4091-bf8e-5f660831e8cd","Type":"ContainerStarted","Data":"2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853"}
Nov 24 12:14:34 crc kubenswrapper[4782]: I1124 12:14:34.474900 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754ccd5b54-bt86q" event={"ID":"94a02614-e0c0-4091-bf8e-5f660831e8cd","Type":"ContainerStarted","Data":"37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82"}
Nov 24 12:14:34 crc kubenswrapper[4782]: I1124 12:14:34.474919 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754ccd5b54-bt86q" event={"ID":"94a02614-e0c0-4091-bf8e-5f660831e8cd","Type":"ContainerStarted","Data":"5d55a65266ded68a44e79f73030f56dbf66cf409fbde5e42b4817baac2077243"}
Nov 24 12:14:34 crc kubenswrapper[4782]: I1124 12:14:34.474940 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:34 crc kubenswrapper[4782]: I1124 12:14:34.481134 4782 generic.go:334] "Generic (PLEG): container finished" podID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerID="42d3b3054c7c9580dfb0cf064cad8c67bb18aa0399c76905ac8952f95fb5dd41" exitCode=0
Nov 24 12:14:34 crc kubenswrapper[4782]: I1124 12:14:34.482239 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" event={"ID":"9f43951b-2d39-49a9-9b5d-023305c2e89a","Type":"ContainerDied","Data":"42d3b3054c7c9580dfb0cf064cad8c67bb18aa0399c76905ac8952f95fb5dd41"}
Nov 24 12:14:34 crc kubenswrapper[4782]: I1124 12:14:34.501714 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-754ccd5b54-bt86q" podStartSLOduration=3.501698025 podStartE2EDuration="3.501698025s" podCreationTimestamp="2025-11-24 12:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:34.497772167 +0000 UTC m=+1123.741605946" watchObservedRunningTime="2025-11-24 12:14:34.501698025 +0000 UTC m=+1123.745531794"
Nov 24 12:14:35 crc kubenswrapper[4782]: I1124 12:14:35.500382 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:14:35 crc kubenswrapper[4782]: I1124 12:14:35.500898 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:14:35 crc kubenswrapper[4782]: I1124 12:14:35.501786 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:14:35 crc kubenswrapper[4782]: I1124 12:14:35.501798 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:14:35 crc kubenswrapper[4782]: I1124 12:14:35.524287 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-754ccd5b54-bt86q"
Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.176960 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5584bf45bd-6fhhg"]
Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.184415 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.207489 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.207820 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.239681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-internal-tls-certs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.239888 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsjb\" (UniqueName: \"kubernetes.io/projected/de56c6c9-b982-419d-be5c-97f1f9379747-kube-api-access-zpsjb\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.239982 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de56c6c9-b982-419d-be5c-97f1f9379747-logs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.240084 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-config-data\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.240143 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-public-tls-certs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.240197 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-combined-ca-bundle\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.240236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-config-data-custom\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.253553 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5584bf45bd-6fhhg"] Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.342349 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zpsjb\" (UniqueName: \"kubernetes.io/projected/de56c6c9-b982-419d-be5c-97f1f9379747-kube-api-access-zpsjb\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.342940 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de56c6c9-b982-419d-be5c-97f1f9379747-logs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.342995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-config-data\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.343017 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-public-tls-certs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.343039 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-combined-ca-bundle\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.343063 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-config-data-custom\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.343090 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-internal-tls-certs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.343759 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de56c6c9-b982-419d-be5c-97f1f9379747-logs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.419366 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-internal-tls-certs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.428781 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-public-tls-certs\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.429109 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-config-data-custom\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.432882 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-combined-ca-bundle\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.438471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsjb\" (UniqueName: \"kubernetes.io/projected/de56c6c9-b982-419d-be5c-97f1f9379747-kube-api-access-zpsjb\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.438471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de56c6c9-b982-419d-be5c-97f1f9379747-config-data\") pod \"barbican-api-5584bf45bd-6fhhg\" (UID: \"de56c6c9-b982-419d-be5c-97f1f9379747\") " pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:36 crc kubenswrapper[4782]: I1124 12:14:36.509472 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.180113 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5584bf45bd-6fhhg"] Nov 24 12:14:37 crc kubenswrapper[4782]: W1124 12:14:37.184271 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde56c6c9_b982_419d_be5c_97f1f9379747.slice/crio-b01c3f548359cd84bf7fd32dccb9d256caae36e3874953a8aefdbc3fdb91a742 WatchSource:0}: Error finding container b01c3f548359cd84bf7fd32dccb9d256caae36e3874953a8aefdbc3fdb91a742: Status 404 returned error can't find the container with id b01c3f548359cd84bf7fd32dccb9d256caae36e3874953a8aefdbc3fdb91a742 Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.522795 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584bf45bd-6fhhg" event={"ID":"de56c6c9-b982-419d-be5c-97f1f9379747","Type":"ContainerStarted","Data":"9ee87ef44974375a4a114dcbb6e3bf54f8103c8f3a91168ebd75c77f31d7e812"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.522839 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584bf45bd-6fhhg" event={"ID":"de56c6c9-b982-419d-be5c-97f1f9379747","Type":"ContainerStarted","Data":"b01c3f548359cd84bf7fd32dccb9d256caae36e3874953a8aefdbc3fdb91a742"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.546977 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerStarted","Data":"ed407cd4ecbed8c12cb5475a7baad02beee17963944921dce1c10e088f1e294f"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.548238 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.553135 4782 generic.go:334] "Generic (PLEG): container finished" podID="73188696-c109-46f8-985b-6f5e9ef5b787" containerID="5021bb4a51cda786f28f1d047b9f835406e4184b33c5fecacfed73e03ce2c28b" exitCode=0 Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.553215 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pv8t7" event={"ID":"73188696-c109-46f8-985b-6f5e9ef5b787","Type":"ContainerDied","Data":"5021bb4a51cda786f28f1d047b9f835406e4184b33c5fecacfed73e03ce2c28b"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.555474 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8668478d95-lb5cp" event={"ID":"b310f8bf-62fa-4955-984a-1df40c4e3a38","Type":"ContainerStarted","Data":"d57634f425210f5152ebbada097ee17cf720bf5ea2d4632242ade822d7a47108"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.555522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8668478d95-lb5cp" event={"ID":"b310f8bf-62fa-4955-984a-1df40c4e3a38","Type":"ContainerStarted","Data":"266546c76ed8673eb5bde8d8e02080f5d77d353efc89635a4dd51d2181426e41"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.558787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-567dd88794-rs7lm" event={"ID":"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b","Type":"ContainerStarted","Data":"cc330eb765ff62259fbaa0a8fa06ca93da4c2fe456684f01ee9356315376dcc2"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.558830 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-567dd88794-rs7lm" event={"ID":"4f2c93b3-0f72-4e4e-bc85-c719e2e9954b","Type":"ContainerStarted","Data":"7f5f356d3a8605e34d12edf4c170515a637e38bb4c31e15f891682838df8eeed"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.562735 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" event={"ID":"9f43951b-2d39-49a9-9b5d-023305c2e89a","Type":"ContainerStarted","Data":"391730c1a5db20c6a50d3253bdd50cd87fd88a6ded6a3d0f7a728aaed73d00d3"} Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.563129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.597226 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7248453919999998 podStartE2EDuration="8.59720008s" podCreationTimestamp="2025-11-24 12:14:29 +0000 UTC" firstStartedPulling="2025-11-24 12:14:30.445552921 +0000 UTC m=+1119.689386690" lastFinishedPulling="2025-11-24 12:14:36.317907609 +0000 UTC m=+1125.561741378" observedRunningTime="2025-11-24 12:14:37.585167379 +0000 UTC m=+1126.829001148" watchObservedRunningTime="2025-11-24 12:14:37.59720008 +0000 UTC m=+1126.841033859" Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.634710 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8668478d95-lb5cp" podStartSLOduration=3.340249215 podStartE2EDuration="6.634691191s" podCreationTimestamp="2025-11-24 12:14:31 +0000 UTC" firstStartedPulling="2025-11-24 12:14:32.887783102 +0000 UTC m=+1122.131616871" lastFinishedPulling="2025-11-24 12:14:36.182225078 +0000 UTC m=+1125.426058847" observedRunningTime="2025-11-24 12:14:37.606948028 +0000 UTC m=+1126.850781797" watchObservedRunningTime="2025-11-24 12:14:37.634691191 +0000 UTC m=+1126.878524960" Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.636651 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-567dd88794-rs7lm" podStartSLOduration=3.33095572 podStartE2EDuration="6.636642755s" podCreationTimestamp="2025-11-24 12:14:31 +0000 UTC" firstStartedPulling="2025-11-24 12:14:33.012867912 +0000 UTC m=+1122.256701691" lastFinishedPulling="2025-11-24 12:14:36.318554957 +0000 UTC m=+1125.562388726" observedRunningTime="2025-11-24 12:14:37.632635315 +0000 UTC m=+1126.876469104" watchObservedRunningTime="2025-11-24 12:14:37.636642755 +0000 UTC m=+1126.880476524" Nov 24 12:14:37 crc kubenswrapper[4782]: I1124 12:14:37.707765 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" podStartSLOduration=6.70774253 podStartE2EDuration="6.70774253s" podCreationTimestamp="2025-11-24 12:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:37.673300053 +0000 UTC m=+1126.917133832" watchObservedRunningTime="2025-11-24 12:14:37.70774253 +0000 UTC m=+1126.951576299" Nov 24 12:14:38 crc kubenswrapper[4782]: I1124 12:14:38.586289 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584bf45bd-6fhhg" event={"ID":"de56c6c9-b982-419d-be5c-97f1f9379747","Type":"ContainerStarted","Data":"d2a863b9b4f6e954bedaa64503b5add98cf53604f06bacedda50b01e237d5316"} Nov 24 12:14:38 crc kubenswrapper[4782]: I1124 12:14:38.589563 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:38 crc kubenswrapper[4782]: I1124 12:14:38.589596 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:38 crc kubenswrapper[4782]: I1124 12:14:38.611576 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5584bf45bd-6fhhg" podStartSLOduration=2.611550755 podStartE2EDuration="2.611550755s" podCreationTimestamp="2025-11-24 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:38.607526444 +0000 UTC m=+1127.851360223" watchObservedRunningTime="2025-11-24 12:14:38.611550755 +0000 UTC m=+1127.855384524" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.097662 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.213352 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-config-data\") pod \"73188696-c109-46f8-985b-6f5e9ef5b787\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.213453 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-combined-ca-bundle\") pod \"73188696-c109-46f8-985b-6f5e9ef5b787\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.213491 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-scripts\") pod \"73188696-c109-46f8-985b-6f5e9ef5b787\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.213518 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-db-sync-config-data\") pod \"73188696-c109-46f8-985b-6f5e9ef5b787\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.213548 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lmx5\" (UniqueName: \"kubernetes.io/projected/73188696-c109-46f8-985b-6f5e9ef5b787-kube-api-access-5lmx5\") pod \"73188696-c109-46f8-985b-6f5e9ef5b787\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.213636 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73188696-c109-46f8-985b-6f5e9ef5b787-etc-machine-id\") pod \"73188696-c109-46f8-985b-6f5e9ef5b787\" (UID: \"73188696-c109-46f8-985b-6f5e9ef5b787\") " Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.214245 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73188696-c109-46f8-985b-6f5e9ef5b787-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73188696-c109-46f8-985b-6f5e9ef5b787" (UID: "73188696-c109-46f8-985b-6f5e9ef5b787"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.214365 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73188696-c109-46f8-985b-6f5e9ef5b787-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.219018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "73188696-c109-46f8-985b-6f5e9ef5b787" (UID: "73188696-c109-46f8-985b-6f5e9ef5b787"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.219544 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73188696-c109-46f8-985b-6f5e9ef5b787-kube-api-access-5lmx5" (OuterVolumeSpecName: "kube-api-access-5lmx5") pod "73188696-c109-46f8-985b-6f5e9ef5b787" (UID: "73188696-c109-46f8-985b-6f5e9ef5b787"). InnerVolumeSpecName "kube-api-access-5lmx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.223321 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-scripts" (OuterVolumeSpecName: "scripts") pod "73188696-c109-46f8-985b-6f5e9ef5b787" (UID: "73188696-c109-46f8-985b-6f5e9ef5b787"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.249957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73188696-c109-46f8-985b-6f5e9ef5b787" (UID: "73188696-c109-46f8-985b-6f5e9ef5b787"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.272711 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-config-data" (OuterVolumeSpecName: "config-data") pod "73188696-c109-46f8-985b-6f5e9ef5b787" (UID: "73188696-c109-46f8-985b-6f5e9ef5b787"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.316090 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.316136 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.316149 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.316160 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73188696-c109-46f8-985b-6f5e9ef5b787-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.316170 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lmx5\" (UniqueName: \"kubernetes.io/projected/73188696-c109-46f8-985b-6f5e9ef5b787-kube-api-access-5lmx5\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.639378 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pv8t7" event={"ID":"73188696-c109-46f8-985b-6f5e9ef5b787","Type":"ContainerDied","Data":"c436e5c02c692a0a2ee2b161e62c1e194d14eaef1192bbf2e8e37391ea12f6a0"} Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.639681 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c436e5c02c692a0a2ee2b161e62c1e194d14eaef1192bbf2e8e37391ea12f6a0" Nov 24 12:14:39 crc kubenswrapper[4782]: I1124 12:14:39.639751 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pv8t7" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.113112 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-bp26f"] Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.118686 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" podUID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerName="dnsmasq-dns" containerID="cri-o://391730c1a5db20c6a50d3253bdd50cd87fd88a6ded6a3d0f7a728aaed73d00d3" gracePeriod=10 Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.152312 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:14:40 crc kubenswrapper[4782]: E1124 12:14:40.152796 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73188696-c109-46f8-985b-6f5e9ef5b787" containerName="cinder-db-sync" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.152818 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="73188696-c109-46f8-985b-6f5e9ef5b787" containerName="cinder-db-sync" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.153053 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="73188696-c109-46f8-985b-6f5e9ef5b787" containerName="cinder-db-sync" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.154256 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.159598 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.159994 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.160143 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.160986 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-ctw29"] Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.164636 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-958sd" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.169974 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.191555 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.205605 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-ctw29"] Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262232 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262299 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-svc\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262428 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-config\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262490 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262523 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262554 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq47w\" (UniqueName: \"kubernetes.io/projected/0c4a4692-5115-41e2-9e23-4ea11ef21a08-kube-api-access-qq47w\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262597 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262652 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262728 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.262890 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6gp\" (UniqueName: \"kubernetes.io/projected/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-kube-api-access-5d6gp\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.365605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.365994 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d6gp\" (UniqueName: \"kubernetes.io/projected/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-kube-api-access-5d6gp\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366032 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366073 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-svc\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366109 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366167 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366205 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-config\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366230 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366297 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq47w\" (UniqueName: \"kubernetes.io/projected/0c4a4692-5115-41e2-9e23-4ea11ef21a08-kube-api-access-qq47w\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366333 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.366398 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.368404 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.370345 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-svc\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.370920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.371783 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-config\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.371824 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.371983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.383310 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.383825 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.384387 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.385080 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.404561 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d6gp\" (UniqueName: \"kubernetes.io/projected/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-kube-api-access-5d6gp\") pod \"cinder-scheduler-0\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") " pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.406976 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq47w\" (UniqueName: \"kubernetes.io/projected/0c4a4692-5115-41e2-9e23-4ea11ef21a08-kube-api-access-qq47w\") pod \"dnsmasq-dns-674b76c99f-ctw29\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.455525 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.457086 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.469773 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.477882 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.479960 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.496601 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.580628 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-scripts\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.583860 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.583945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.584051 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-logs\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.584677 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data-custom\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.584842 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.584910 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b557w\" (UniqueName: \"kubernetes.io/projected/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-kube-api-access-b557w\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.688809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.688854 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.688895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-logs\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.689003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data-custom\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.689045 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.689073 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b557w\" (UniqueName: \"kubernetes.io/projected/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-kube-api-access-b557w\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.689124 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-scripts\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.690094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-logs\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.690864 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.705770 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.707136 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-scripts\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.708675 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.720713 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data-custom\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.730672 4782 generic.go:334] "Generic (PLEG): container finished" podID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerID="391730c1a5db20c6a50d3253bdd50cd87fd88a6ded6a3d0f7a728aaed73d00d3" exitCode=0 Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.731865 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" event={"ID":"9f43951b-2d39-49a9-9b5d-023305c2e89a","Type":"ContainerDied","Data":"391730c1a5db20c6a50d3253bdd50cd87fd88a6ded6a3d0f7a728aaed73d00d3"} Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.788582 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b557w\" (UniqueName: \"kubernetes.io/projected/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-kube-api-access-b557w\") pod \"cinder-api-0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " pod="openstack/cinder-api-0" Nov 24 12:14:40 crc kubenswrapper[4782]: I1124 12:14:40.989314 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.142636 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.213954 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-nb\") pod \"9f43951b-2d39-49a9-9b5d-023305c2e89a\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.214305 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-svc\") pod \"9f43951b-2d39-49a9-9b5d-023305c2e89a\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.214340 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-swift-storage-0\") pod \"9f43951b-2d39-49a9-9b5d-023305c2e89a\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.214438 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqbzx\" (UniqueName: \"kubernetes.io/projected/9f43951b-2d39-49a9-9b5d-023305c2e89a-kube-api-access-hqbzx\") pod \"9f43951b-2d39-49a9-9b5d-023305c2e89a\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.214496 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-sb\") pod \"9f43951b-2d39-49a9-9b5d-023305c2e89a\" (UID: \"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.214526 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-config\") pod \"9f43951b-2d39-49a9-9b5d-023305c2e89a\" (UID: 
\"9f43951b-2d39-49a9-9b5d-023305c2e89a\") " Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.238035 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f43951b-2d39-49a9-9b5d-023305c2e89a-kube-api-access-hqbzx" (OuterVolumeSpecName: "kube-api-access-hqbzx") pod "9f43951b-2d39-49a9-9b5d-023305c2e89a" (UID: "9f43951b-2d39-49a9-9b5d-023305c2e89a"). InnerVolumeSpecName "kube-api-access-hqbzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.315854 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqbzx\" (UniqueName: \"kubernetes.io/projected/9f43951b-2d39-49a9-9b5d-023305c2e89a-kube-api-access-hqbzx\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.337723 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-ctw29"] Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.338256 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f43951b-2d39-49a9-9b5d-023305c2e89a" (UID: "9f43951b-2d39-49a9-9b5d-023305c2e89a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.369135 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.414833 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-config" (OuterVolumeSpecName: "config") pod "9f43951b-2d39-49a9-9b5d-023305c2e89a" (UID: "9f43951b-2d39-49a9-9b5d-023305c2e89a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.418181 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.418208 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.465926 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f43951b-2d39-49a9-9b5d-023305c2e89a" (UID: "9f43951b-2d39-49a9-9b5d-023305c2e89a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.520751 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.523127 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f43951b-2d39-49a9-9b5d-023305c2e89a" (UID: "9f43951b-2d39-49a9-9b5d-023305c2e89a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.558034 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f43951b-2d39-49a9-9b5d-023305c2e89a" (UID: "9f43951b-2d39-49a9-9b5d-023305c2e89a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.622416 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.622635 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f43951b-2d39-49a9-9b5d-023305c2e89a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.754548 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.764215 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" event={"ID":"0c4a4692-5115-41e2-9e23-4ea11ef21a08","Type":"ContainerStarted","Data":"d14bbfe82397026a8561956f5107230dfe171585c70c63e7cbf14af309b15626"} Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.768437 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d8027c1-e92f-4e6c-b07d-49c24cee85c7","Type":"ContainerStarted","Data":"14407a6fe4c295a7325529f3c7ad434425f92b3e9103be750bf5c7cf0857267d"} Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.803474 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" event={"ID":"9f43951b-2d39-49a9-9b5d-023305c2e89a","Type":"ContainerDied","Data":"a13a077d95a5fc3aa01ba86633644f181abfbb38a46d0b43fccdb6cc7d6ce924"} Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.803528 4782 scope.go:117] "RemoveContainer" containerID="391730c1a5db20c6a50d3253bdd50cd87fd88a6ded6a3d0f7a728aaed73d00d3" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.803671 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-bp26f" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.862843 4782 scope.go:117] "RemoveContainer" containerID="42d3b3054c7c9580dfb0cf064cad8c67bb18aa0399c76905ac8952f95fb5dd41" Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.954423 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-bp26f"] Nov 24 12:14:41 crc kubenswrapper[4782]: I1124 12:14:41.986020 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-bp26f"] Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.701335 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.701667 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.714005 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.854483 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0","Type":"ContainerStarted","Data":"92a7fe39b930369cfaa7d581684401bd72117804a34bfdabea4dbd0f7a19692e"} Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.862454 4782 generic.go:334] "Generic (PLEG): container finished" podID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerID="54553fe4e728a93e003c02e58a8d2d9a83c284667eb34774a7204da0e7afab54" exitCode=0 Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.864183 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" event={"ID":"0c4a4692-5115-41e2-9e23-4ea11ef21a08","Type":"ContainerDied","Data":"54553fe4e728a93e003c02e58a8d2d9a83c284667eb34774a7204da0e7afab54"} Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.990242 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:42 crc kubenswrapper[4782]: I1124 12:14:42.990334 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.090864 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.408638 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.408693 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.527633 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f43951b-2d39-49a9-9b5d-023305c2e89a" path="/var/lib/kubelet/pods/9f43951b-2d39-49a9-9b5d-023305c2e89a/volumes" Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 
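The barbican-api healthcheck failures a few entries above are HTTP probes timing out: the kubelet issues a plain GET against the container's endpoint and fails the probe when no response arrives within the probe's timeout, which surfaces as the Go client error "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". A rough stand-alone approximation (endpoint copied from the entries; treating 2xx/3xx as success mirrors the documented HTTPGet semantics, but this is a sketch, not the kubelet's prober):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce approximates one kubelet HTTPGet probe attempt: a single GET
// with a hard client timeout; success iff the status is in [200, 400).
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded (Client.Timeout ...)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	if err := probeOnce("http://10.217.0.159:9311/healthcheck", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```

The "HTTP probe failed with statuscode: 500" output that the cinder-scheduler startup probe reports further down is the other failure mode of the same check: the endpoint answered in time but with a non-success status.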
Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.902951 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" event={"ID":"0c4a4692-5115-41e2-9e23-4ea11ef21a08","Type":"ContainerStarted","Data":"c5b87f3b60199dc1a35f26c34af9e02e8c107639fd184f241ba9f12b710f2301"}
Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.904243 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674b76c99f-ctw29"
Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.916456 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0","Type":"ContainerStarted","Data":"bf75227df0b2637759e4087e89e420bd7ec1e177ea0255cd970a72b0293dacf0"}
Nov 24 12:14:43 crc kubenswrapper[4782]: I1124 12:14:43.938774 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" podStartSLOduration=3.938755021 podStartE2EDuration="3.938755021s" podCreationTimestamp="2025-11-24 12:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:43.935860411 +0000 UTC m=+1133.179694190" watchObservedRunningTime="2025-11-24 12:14:43.938755021 +0000 UTC m=+1133.182588790"
Nov 24 12:14:44 crc kubenswrapper[4782]: I1124 12:14:44.521162 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 24 12:14:44 crc kubenswrapper[4782]: I1124 12:14:44.938104 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0","Type":"ContainerStarted","Data":"1895afb98fec8c2d968a8e68fb5488f753695b2d0f201a00927dd7333d85081e"}
Nov 24 12:14:44 crc kubenswrapper[4782]: I1124 12:14:44.938462 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 24 12:14:44 crc kubenswrapper[4782]: I1124 12:14:44.938273 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" containerID="cri-o://1895afb98fec8c2d968a8e68fb5488f753695b2d0f201a00927dd7333d85081e" gracePeriod=30
Nov 24 12:14:44 crc kubenswrapper[4782]: I1124 12:14:44.938197 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api-log" containerID="cri-o://bf75227df0b2637759e4087e89e420bd7ec1e177ea0255cd970a72b0293dacf0" gracePeriod=30
Nov 24 12:14:44 crc kubenswrapper[4782]: I1124 12:14:44.966007 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.9659881299999995 podStartE2EDuration="4.96598813s" podCreationTimestamp="2025-11-24 12:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:44.960976422 +0000 UTC m=+1134.204810201" watchObservedRunningTime="2025-11-24 12:14:44.96598813 +0000 UTC m=+1134.209821899"
Nov 24 12:14:44 crc kubenswrapper[4782]: I1124 12:14:44.974666 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d8027c1-e92f-4e6c-b07d-49c24cee85c7","Type":"ContainerStarted","Data":"3a86ce44d4c1cd0f854a412fdbf1a5c5819c094b5b7a97047a87185167dd25de"}
Nov 24 12:14:45 crc kubenswrapper[4782]: I1124 12:14:45.983785 4782 generic.go:334] "Generic (PLEG): container finished" podID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerID="bf75227df0b2637759e4087e89e420bd7ec1e177ea0255cd970a72b0293dacf0" exitCode=143
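The exitCode=143 in the entry above is the expected result of the two "Killing container with a grace period" operations just before it: CRI-O delivers SIGTERM, and a process terminated by an unhandled signal is reported with exit code 128 + signal number, so SIGTERM (15) yields 143. A small Unix-only illustration of that convention (the child command here is arbitrary):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	fmt.Println("expected:", 128+int(syscall.SIGTERM)) // 143

	cmd := exec.Command("sleep", "60")
	_ = cmd.Start()
	time.Sleep(100 * time.Millisecond)
	_ = cmd.Process.Signal(syscall.SIGTERM) // what the runtime sends first
	err := cmd.Wait()
	if ee, ok := err.(*exec.ExitError); ok {
		ws := ee.Sys().(syscall.WaitStatus)
		if ws.Signaled() {
			// Report it the way container runtimes do: 128 + signal number.
			fmt.Println("runtime-style exit code:", 128+int(ws.Signal())) // 143
		}
	}
}
```

Had the container ignored SIGTERM for the full 30-second grace period, the follow-up SIGKILL (signal 9) would have produced exit code 137 instead.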
Nov 24 12:14:45 crc kubenswrapper[4782]: I1124 12:14:45.983875 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0","Type":"ContainerDied","Data":"bf75227df0b2637759e4087e89e420bd7ec1e177ea0255cd970a72b0293dacf0"}
Nov 24 12:14:45 crc kubenswrapper[4782]: I1124 12:14:45.986203 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d8027c1-e92f-4e6c-b07d-49c24cee85c7","Type":"ContainerStarted","Data":"736bc5742537ccad3e282505c7af797b96cd5b3f170b4c1d3d50bce5ee96cbee"}
Nov 24 12:14:46 crc kubenswrapper[4782]: I1124 12:14:46.013360 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.677440326 podStartE2EDuration="6.013345292s" podCreationTimestamp="2025-11-24 12:14:40 +0000 UTC" firstStartedPulling="2025-11-24 12:14:41.408353616 +0000 UTC m=+1130.652187385" lastFinishedPulling="2025-11-24 12:14:42.744258582 +0000 UTC m=+1131.988092351" observedRunningTime="2025-11-24 12:14:46.010900645 +0000 UTC m=+1135.254734414" watchObservedRunningTime="2025-11-24 12:14:46.013345292 +0000 UTC m=+1135.257179061"
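The durations in the cinder-scheduler-0 entry above are internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, 6.013345292s - (12:14:42.744258582 - 12:14:41.408353616) = 6.013345292s - 1.335904966s = 4.677440326s. A quick check with the standard library (timestamps copied from the entry; the subtraction rule is inferred from these fields, so treat it as a reading of the numbers rather than a specification of the tracker):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-11-24 12:14:41.408353616 +0000 UTC" form above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstPull, _ := time.Parse(layout, "2025-11-24 12:14:41.408353616 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-11-24 12:14:42.744258582 +0000 UTC")

	e2e, _ := time.ParseDuration("6.013345292s") // podStartE2EDuration
	pull := lastPull.Sub(firstPull)              // image-pull window: 1.335904966s

	fmt.Println(e2e - pull) // 4.677440326s == podStartSLOduration
}
```

The dnsmasq and cinder-api entries nearby show the degenerate case: firstStartedPulling and lastFinishedPulling are the zero time ("0001-01-01 00:00:00"), nothing is subtracted, and the SLO duration equals the E2E duration.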
Nov 24 12:14:46 crc kubenswrapper[4782]: I1124 12:14:46.997218 4782 generic.go:334] "Generic (PLEG): container finished" podID="42e30cc3-dd65-45af-82ed-40354098a697" containerID="8edc41d602a47fd694df6ba55cd1e42b6459adf724c2909fc7f6452d04590b73" exitCode=0
Nov 24 12:14:46 crc kubenswrapper[4782]: I1124 12:14:46.997390 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4k2r" event={"ID":"42e30cc3-dd65-45af-82ed-40354098a697","Type":"ContainerDied","Data":"8edc41d602a47fd694df6ba55cd1e42b6459adf724c2909fc7f6452d04590b73"}
Nov 24 12:14:47 crc kubenswrapper[4782]: I1124 12:14:47.410780 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:14:47 crc kubenswrapper[4782]: I1124 12:14:47.410882 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:14:47 crc kubenswrapper[4782]: I1124 12:14:47.522553 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:14:47 crc kubenswrapper[4782]: I1124 12:14:47.522747 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.455478 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c4k2r"
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.492600 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.492596 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.536189 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-config\") pod \"42e30cc3-dd65-45af-82ed-40354098a697\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") "
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.536337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvj4s\" (UniqueName: \"kubernetes.io/projected/42e30cc3-dd65-45af-82ed-40354098a697-kube-api-access-nvj4s\") pod \"42e30cc3-dd65-45af-82ed-40354098a697\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") "
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.536469 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-combined-ca-bundle\") pod \"42e30cc3-dd65-45af-82ed-40354098a697\" (UID: \"42e30cc3-dd65-45af-82ed-40354098a697\") "
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.549291 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e30cc3-dd65-45af-82ed-40354098a697-kube-api-access-nvj4s" (OuterVolumeSpecName: "kube-api-access-nvj4s") pod "42e30cc3-dd65-45af-82ed-40354098a697" (UID: "42e30cc3-dd65-45af-82ed-40354098a697"). InnerVolumeSpecName "kube-api-access-nvj4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.640484 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvj4s\" (UniqueName: \"kubernetes.io/projected/42e30cc3-dd65-45af-82ed-40354098a697-kube-api-access-nvj4s\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.653076 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-config" (OuterVolumeSpecName: "config") pod "42e30cc3-dd65-45af-82ed-40354098a697" (UID: "42e30cc3-dd65-45af-82ed-40354098a697"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.661529 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42e30cc3-dd65-45af-82ed-40354098a697" (UID: "42e30cc3-dd65-45af-82ed-40354098a697"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.742280 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:48 crc kubenswrapper[4782]: I1124 12:14:48.742320 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e30cc3-dd65-45af-82ed-40354098a697-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.059943 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4k2r" event={"ID":"42e30cc3-dd65-45af-82ed-40354098a697","Type":"ContainerDied","Data":"8ef0f840a4b19ced7d2553893f3df0dfad3218498d889854ebf813031b3d47cd"}
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.060192 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef0f840a4b19ced7d2553893f3df0dfad3218498d889854ebf813031b3d47cd"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.060274 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c4k2r"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.314337 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-ctw29"]
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.314572 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerName="dnsmasq-dns" containerID="cri-o://c5b87f3b60199dc1a35f26c34af9e02e8c107639fd184f241ba9f12b710f2301" gracePeriod=10
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.325183 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674b76c99f-ctw29"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.477439 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-xs4pw"]
Nov 24 12:14:49 crc kubenswrapper[4782]: E1124 12:14:49.477892 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerName="init"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.477910 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerName="init"
Nov 24 12:14:49 crc kubenswrapper[4782]: E1124 12:14:49.477932 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e30cc3-dd65-45af-82ed-40354098a697" containerName="neutron-db-sync"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.477940 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e30cc3-dd65-45af-82ed-40354098a697" containerName="neutron-db-sync"
Nov 24 12:14:49 crc kubenswrapper[4782]: E1124 12:14:49.477963 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerName="dnsmasq-dns"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.477971 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerName="dnsmasq-dns"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.478175 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f43951b-2d39-49a9-9b5d-023305c2e89a" containerName="dnsmasq-dns"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.478199 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e30cc3-dd65-45af-82ed-40354098a697" containerName="neutron-db-sync"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.479345 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.573985 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-config\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.574062 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.574090 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc58r\" (UniqueName: \"kubernetes.io/projected/cc6257af-a928-420b-a8cb-4a174b2a5776-kube-api-access-pc58r\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.574203 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.574275 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.574315 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.574417 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-xs4pw"]
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.676530 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.676601 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.676664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-config\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.676715 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.676760 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc58r\" (UniqueName: \"kubernetes.io/projected/cc6257af-a928-420b-a8cb-4a174b2a5776-kube-api-access-pc58r\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.676826 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.678362 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-config\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.682123 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.682796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.683036 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.693957 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.705022 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67ff6979bd-w7crx"]
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.710447 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67ff6979bd-w7crx"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.721922 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.722160 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.722315 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-58nxl"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.723050 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.756995 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc58r\" (UniqueName: \"kubernetes.io/projected/cc6257af-a928-420b-a8cb-4a174b2a5776-kube-api-access-pc58r\") pod \"dnsmasq-dns-6bb4fc677f-xs4pw\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.778415 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qz7d\" (UniqueName: \"kubernetes.io/projected/31ab3120-bee9-41b1-b9cc-61b5a953945e-kube-api-access-2qz7d\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.778563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-httpd-config\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.778623 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-combined-ca-bundle\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.778659 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-ovndb-tls-certs\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx"
Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.778680 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-config\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx"
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67ff6979bd-w7crx"] Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.855781 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.880861 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qz7d\" (UniqueName: \"kubernetes.io/projected/31ab3120-bee9-41b1-b9cc-61b5a953945e-kube-api-access-2qz7d\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.880998 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-httpd-config\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.881054 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-combined-ca-bundle\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.881090 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-ovndb-tls-certs\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.881127 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-config\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.893574 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-config\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.899792 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-httpd-config\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.911873 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-ovndb-tls-certs\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.930903 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-combined-ca-bundle\") pod \"neutron-67ff6979bd-w7crx\" 
(UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:49 crc kubenswrapper[4782]: I1124 12:14:49.937105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qz7d\" (UniqueName: \"kubernetes.io/projected/31ab3120-bee9-41b1-b9cc-61b5a953945e-kube-api-access-2qz7d\") pod \"neutron-67ff6979bd-w7crx\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") " pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:50 crc kubenswrapper[4782]: I1124 12:14:50.106924 4782 generic.go:334] "Generic (PLEG): container finished" podID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerID="c5b87f3b60199dc1a35f26c34af9e02e8c107639fd184f241ba9f12b710f2301" exitCode=0 Nov 24 12:14:50 crc kubenswrapper[4782]: I1124 12:14:50.107287 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" event={"ID":"0c4a4692-5115-41e2-9e23-4ea11ef21a08","Type":"ContainerDied","Data":"c5b87f3b60199dc1a35f26c34af9e02e8c107639fd184f241ba9f12b710f2301"} Nov 24 12:14:50 crc kubenswrapper[4782]: I1124 12:14:50.140851 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:50 crc kubenswrapper[4782]: I1124 12:14:50.478994 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 12:14:50 crc kubenswrapper[4782]: I1124 12:14:50.489491 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.161:8080/\": dial tcp 10.217.0.161:8080: connect: connection refused" Nov 24 12:14:50 crc kubenswrapper[4782]: I1124 12:14:50.616742 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-xs4pw"] Nov 24 12:14:50 crc kubenswrapper[4782]: I1124 12:14:50.820309 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.012523 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-config\") pod \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.012875 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-nb\") pod \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.012908 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-sb\") pod \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.013014 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq47w\" (UniqueName: \"kubernetes.io/projected/0c4a4692-5115-41e2-9e23-4ea11ef21a08-kube-api-access-qq47w\") pod \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.013070 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-svc\") pod \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.013112 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-swift-storage-0\") pod \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\" (UID: \"0c4a4692-5115-41e2-9e23-4ea11ef21a08\") " Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.023560 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4a4692-5115-41e2-9e23-4ea11ef21a08-kube-api-access-qq47w" (OuterVolumeSpecName: "kube-api-access-qq47w") pod "0c4a4692-5115-41e2-9e23-4ea11ef21a08" (UID: "0c4a4692-5115-41e2-9e23-4ea11ef21a08"). InnerVolumeSpecName "kube-api-access-qq47w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.116098 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq47w\" (UniqueName: \"kubernetes.io/projected/0c4a4692-5115-41e2-9e23-4ea11ef21a08-kube-api-access-qq47w\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.120608 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c4a4692-5115-41e2-9e23-4ea11ef21a08" (UID: "0c4a4692-5115-41e2-9e23-4ea11ef21a08"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.126884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" event={"ID":"cc6257af-a928-420b-a8cb-4a174b2a5776","Type":"ContainerStarted","Data":"53f44ad998adb08a0f7bca42beaa1e129ede4437a69216c4ebf181712a8a9a1a"} Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.129539 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c4a4692-5115-41e2-9e23-4ea11ef21a08" (UID: "0c4a4692-5115-41e2-9e23-4ea11ef21a08"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.130922 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-config" (OuterVolumeSpecName: "config") pod "0c4a4692-5115-41e2-9e23-4ea11ef21a08" (UID: "0c4a4692-5115-41e2-9e23-4ea11ef21a08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.173763 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" event={"ID":"0c4a4692-5115-41e2-9e23-4ea11ef21a08","Type":"ContainerDied","Data":"d14bbfe82397026a8561956f5107230dfe171585c70c63e7cbf14af309b15626"} Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.173811 4782 scope.go:117] "RemoveContainer" containerID="c5b87f3b60199dc1a35f26c34af9e02e8c107639fd184f241ba9f12b710f2301" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.173993 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.182617 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c4a4692-5115-41e2-9e23-4ea11ef21a08" (UID: "0c4a4692-5115-41e2-9e23-4ea11ef21a08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.183561 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c4a4692-5115-41e2-9e23-4ea11ef21a08" (UID: "0c4a4692-5115-41e2-9e23-4ea11ef21a08"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.217649 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.217691 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.217705 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.217716 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.217729 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c4a4692-5115-41e2-9e23-4ea11ef21a08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.232633 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67ff6979bd-w7crx"] Nov 24 12:14:51 crc kubenswrapper[4782]: W1124 12:14:51.236791 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31ab3120_bee9_41b1_b9cc_61b5a953945e.slice/crio-cbe09d3b7a93b49959719ecacc963a888c20a8c4a9529009a18463606b02e48d WatchSource:0}: Error finding container cbe09d3b7a93b49959719ecacc963a888c20a8c4a9529009a18463606b02e48d: Status 404 returned error can't find the container with id cbe09d3b7a93b49959719ecacc963a888c20a8c4a9529009a18463606b02e48d Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.249314 4782 scope.go:117] "RemoveContainer" containerID="54553fe4e728a93e003c02e58a8d2d9a83c284667eb34774a7204da0e7afab54" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.570364 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.575159 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.707018 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-ctw29"] Nov 24 12:14:51 crc kubenswrapper[4782]: I1124 12:14:51.719650 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-ctw29"] Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.184708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67ff6979bd-w7crx" 
event={"ID":"31ab3120-bee9-41b1-b9cc-61b5a953945e","Type":"ContainerStarted","Data":"4dace6148af2078893cc1dd8b0c7708dcbe77672ffa13edd22156610c3163909"} Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.185046 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67ff6979bd-w7crx" event={"ID":"31ab3120-bee9-41b1-b9cc-61b5a953945e","Type":"ContainerStarted","Data":"2cbc8f36f42299eecf78fef8eec96e38465157432d98d5765a72a643d8411bf1"} Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.185058 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67ff6979bd-w7crx" event={"ID":"31ab3120-bee9-41b1-b9cc-61b5a953945e","Type":"ContainerStarted","Data":"cbe09d3b7a93b49959719ecacc963a888c20a8c4a9529009a18463606b02e48d"} Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.185178 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.189835 4782 generic.go:334] "Generic (PLEG): container finished" podID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerID="6ca325d700f653a70d84930b48978ea7613d06b9fbef9bcec2909af6e729bf38" exitCode=0 Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.189864 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" event={"ID":"cc6257af-a928-420b-a8cb-4a174b2a5776","Type":"ContainerDied","Data":"6ca325d700f653a70d84930b48978ea7613d06b9fbef9bcec2909af6e729bf38"} Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.276607 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-754ccd5b54-bt86q" Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.277862 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67ff6979bd-w7crx" podStartSLOduration=3.277846013 podStartE2EDuration="3.277846013s" podCreationTimestamp="2025-11-24 12:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:52.261873884 +0000 UTC m=+1141.505707673" watchObservedRunningTime="2025-11-24 12:14:52.277846013 +0000 UTC m=+1141.521679782" Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.456518 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.480282 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-754ccd5b54-bt86q" Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.544579 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:52 crc kubenswrapper[4782]: I1124 12:14:52.544640 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:53 crc kubenswrapper[4782]: I1124 12:14:53.207828 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" event={"ID":"cc6257af-a928-420b-a8cb-4a174b2a5776","Type":"ContainerStarted","Data":"bd9d61de05614afc65b4c05996e406311f74abe42f44f4ecf3eb8d9612f2add6"} Nov 24 12:14:53 crc kubenswrapper[4782]: I1124 12:14:53.208903 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" Nov 24 12:14:53 crc kubenswrapper[4782]: I1124 12:14:53.238798 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" podStartSLOduration=4.2387809690000005 podStartE2EDuration="4.238780969s" podCreationTimestamp="2025-11-24 12:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:14:53.233872534 +0000 UTC m=+1142.477706303" watchObservedRunningTime="2025-11-24 12:14:53.238780969 +0000 UTC m=+1142.482614738" Nov 24 12:14:53 crc kubenswrapper[4782]: I1124 12:14:53.500293 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" path="/var/lib/kubelet/pods/0c4a4692-5115-41e2-9e23-4ea11ef21a08/volumes" Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.913979 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-665949bbb5-7lm9x"] Nov 24 12:14:54 crc kubenswrapper[4782]: E1124 12:14:54.914609 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerName="dnsmasq-dns" Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.914623 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerName="dnsmasq-dns" Nov 24 12:14:54 crc kubenswrapper[4782]: E1124 12:14:54.914635 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerName="init" Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.914640 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerName="init" Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.916142 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerName="dnsmasq-dns" Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.917162 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.921531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-665949bbb5-7lm9x"] Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.934115 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 12:14:54 crc kubenswrapper[4782]: I1124 12:14:54.939822 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.072699 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-internal-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.072766 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-combined-ca-bundle\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.072818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-httpd-config\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.072849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-ovndb-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.072869 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-public-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.072920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-config\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.072948 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vn5\" (UniqueName: \"kubernetes.io/projected/6046c36e-6c5a-49e4-850b-d15d227c7851-kube-api-access-k5vn5\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.174648 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-config\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.174733 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vn5\" (UniqueName: \"kubernetes.io/projected/6046c36e-6c5a-49e4-850b-d15d227c7851-kube-api-access-k5vn5\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.174796 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-internal-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.174830 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-combined-ca-bundle\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.174888 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-httpd-config\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.174933 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-ovndb-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.174953 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-public-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.182256 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-internal-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.185221 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-combined-ca-bundle\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.185902 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-public-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: 
\"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.186184 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-config\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.186654 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-httpd-config\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.194093 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6046c36e-6c5a-49e4-850b-d15d227c7851-ovndb-tls-certs\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.204149 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vn5\" (UniqueName: \"kubernetes.io/projected/6046c36e-6c5a-49e4-850b-d15d227c7851-kube-api-access-k5vn5\") pod \"neutron-665949bbb5-7lm9x\" (UID: \"6046c36e-6c5a-49e4-850b-d15d227c7851\") " pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.251225 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.424925 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:55 crc kubenswrapper[4782]: I1124 12:14:55.500464 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-674b76c99f-ctw29" podUID="0c4a4692-5115-41e2-9e23-4ea11ef21a08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.162:5353: i/o timeout" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.037854 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.187154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-665949bbb5-7lm9x"] Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.240786 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-665949bbb5-7lm9x" event={"ID":"6046c36e-6c5a-49e4-850b-d15d227c7851","Type":"ContainerStarted","Data":"47a024552f07804e69dff5610e5a6a776acea75758bcc1eeddbe9afaaaa89082"} Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.520890 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.537402 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-548457c99b-pdf6j" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.577218 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.582531 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584bf45bd-6fhhg" podUID="de56c6c9-b982-419d-be5c-97f1f9379747" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.621835 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.766886 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5584bf45bd-6fhhg" Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.881303 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-754ccd5b54-bt86q"] Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.882048 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" containerID="cri-o://2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853" gracePeriod=30 Nov 24 12:14:56 crc kubenswrapper[4782]: I1124 12:14:56.882215 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" containerID="cri-o://37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82" gracePeriod=30 Nov 24 12:14:57 crc kubenswrapper[4782]: I1124 12:14:57.252293 4782 generic.go:334] "Generic (PLEG): container finished" podID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerID="37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82" exitCode=143 Nov 24 12:14:57 crc kubenswrapper[4782]: I1124 12:14:57.252395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754ccd5b54-bt86q" event={"ID":"94a02614-e0c0-4091-bf8e-5f660831e8cd","Type":"ContainerDied","Data":"37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82"} Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.306938 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-665949bbb5-7lm9x" event={"ID":"6046c36e-6c5a-49e4-850b-d15d227c7851","Type":"ContainerStarted","Data":"f7a9487420f875e2d137627d33db7accc51ab0c47fd11dd791f7ffe887d5ba65"} Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.307538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-665949bbb5-7lm9x" event={"ID":"6046c36e-6c5a-49e4-850b-d15d227c7851","Type":"ContainerStarted","Data":"af5c11ec8da7b901b7eef37589c85fd0caa27f72d8e7e5e30c3a148530f0b1d1"} Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.307578 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.336262 4782 
Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.342588 4782 generic.go:334] "Generic (PLEG): container finished" podID="b6cd757b-7259-4caf-b928-2dc936c99028" containerID="4936e6759b1bb688284ae4a7f5c6a07a624b02b19d698563d135b73499c945c8" exitCode=137
Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.342683 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerDied","Data":"4936e6759b1bb688284ae4a7f5c6a07a624b02b19d698563d135b73499c945c8"}
Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.342708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerStarted","Data":"fe33b33da506efdf8f0ec330790ceaef82fa73fd0855882c4ad104afd14f7bbc"}
Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.358065 4782 generic.go:334] "Generic (PLEG): container finished" podID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerID="a882983edbff0b88582f5b543adfc3b5f1a92090d9d3705f639c8751eda3543a" exitCode=137
Nov 24 12:14:58 crc kubenswrapper[4782]: I1124 12:14:58.358111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6574f9bb76-jkv6h" event={"ID":"41a8247d-b0d2-4a46-b108-bc260db36e11","Type":"ContainerDied","Data":"a882983edbff0b88582f5b543adfc3b5f1a92090d9d3705f639c8751eda3543a"}
Nov 24 12:14:59 crc kubenswrapper[4782]: I1124 12:14:59.249818 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54579c9c49-nkmgh"
Nov 24 12:14:59 crc kubenswrapper[4782]: I1124 12:14:59.383019 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6574f9bb76-jkv6h" event={"ID":"41a8247d-b0d2-4a46-b108-bc260db36e11","Type":"ContainerStarted","Data":"125ebd65dabcd15c20027fc4e84b06ecd573dd8fcdb0657e8af1622f5c0d2bbf"}
Nov 24 12:14:59 crc kubenswrapper[4782]: I1124 12:14:59.858607 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
Nov 24 12:14:59 crc kubenswrapper[4782]: I1124 12:14:59.955467 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vwmjc"]
Nov 24 12:14:59 crc kubenswrapper[4782]: I1124 12:14:59.955745 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerName="dnsmasq-dns" containerID="cri-o://51c4fcde52b0aec66588b5115a9d37ef1c91db7055616a8e59f417080e43ebb6" gracePeriod=10
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.152727 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"]
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.156270 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.167616 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.169430 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.169630 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.197964 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"]
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.297671 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f6fefd-3f98-44db-b4cf-05debe821489-config-volume\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.297731 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f6fefd-3f98-44db-b4cf-05debe821489-secret-volume\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.297777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldm7\" (UniqueName: \"kubernetes.io/projected/60f6fefd-3f98-44db-b4cf-05debe821489-kube-api-access-7ldm7\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.392841 4782 generic.go:334] "Generic (PLEG): container finished" podID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerID="51c4fcde52b0aec66588b5115a9d37ef1c91db7055616a8e59f417080e43ebb6" exitCode=0
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.392896 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" event={"ID":"6c88e9bd-ca23-4af3-b79f-40ed8871dd16","Type":"ContainerDied","Data":"51c4fcde52b0aec66588b5115a9d37ef1c91db7055616a8e59f417080e43ebb6"}
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.400072 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f6fefd-3f98-44db-b4cf-05debe821489-config-volume\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.400130 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f6fefd-3f98-44db-b4cf-05debe821489-secret-volume\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"
pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.400174 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldm7\" (UniqueName: \"kubernetes.io/projected/60f6fefd-3f98-44db-b4cf-05debe821489-kube-api-access-7ldm7\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.401139 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f6fefd-3f98-44db-b4cf-05debe821489-config-volume\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.407220 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f6fefd-3f98-44db-b4cf-05debe821489-secret-volume\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.412421 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.412484 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.412542 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.413361 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6c7ce8c7383e549b268b473ebff145c305170441de464250aa04d4d9e063e16"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.413464 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://b6c7ce8c7383e549b268b473ebff145c305170441de464250aa04d4d9e063e16" gracePeriod=600 Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.425479 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldm7\" (UniqueName: \"kubernetes.io/projected/60f6fefd-3f98-44db-b4cf-05debe821489-kube-api-access-7ldm7\") pod \"collect-profiles-29399775-nj867\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" Nov 24 
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.536839 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.559858 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.939272 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.949810 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.958105 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.963757 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6rztz"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.965163 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Nov 24 12:15:00 crc kubenswrapper[4782]: I1124 12:15:00.976716 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.013749 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z89pg\" (UniqueName: \"kubernetes.io/projected/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-kube-api-access-z89pg\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.013822 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.013890 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-openstack-config\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.013938 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.082304 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.116223 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z89pg\" (UniqueName: \"kubernetes.io/projected/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-kube-api-access-z89pg\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.116293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.116400 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-openstack-config\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.116467 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.120161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-openstack-config\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.126158 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.145575 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.145909 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z89pg\" (UniqueName: \"kubernetes.io/projected/c7c7aa63-55ae-4525-a262-c5c9d08e4fe7-kube-api-access-z89pg\") pod \"openstackclient\" (UID: \"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7\") " pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.297775 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.448067 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="b6c7ce8c7383e549b268b473ebff145c305170441de464250aa04d4d9e063e16" exitCode=0
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.448268 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="cinder-scheduler" containerID="cri-o://3a86ce44d4c1cd0f854a412fdbf1a5c5819c094b5b7a97047a87185167dd25de" gracePeriod=30
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.452484 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"b6c7ce8c7383e549b268b473ebff145c305170441de464250aa04d4d9e063e16"}
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.452509 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="probe" containerID="cri-o://736bc5742537ccad3e282505c7af797b96cd5b3f170b4c1d3d50bce5ee96cbee" gracePeriod=30
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.452552 4782 scope.go:117] "RemoveContainer" containerID="5948f238852b206092c207d3cf86760b27f85d8ef83dc51c3375bc2e50a4023a"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.452539 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"312faf553f7586c5bdcb5502ffdf818587cd31bfce204c8d9ae99d508ff07095"}
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.560075 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"]
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.600704 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc"
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.736845 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-svc\") pod \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") "
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.736902 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-sb\") pod \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") "
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.737013 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-config\") pod \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") "
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.737067 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-swift-storage-0\") pod \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") "
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.737162 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-nb\") pod \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") "
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.737225 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv87p\" (UniqueName: \"kubernetes.io/projected/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-kube-api-access-gv87p\") pod \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\" (UID: \"6c88e9bd-ca23-4af3-b79f-40ed8871dd16\") "
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.763569 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-kube-api-access-gv87p" (OuterVolumeSpecName: "kube-api-access-gv87p") pod "6c88e9bd-ca23-4af3-b79f-40ed8871dd16" (UID: "6c88e9bd-ca23-4af3-b79f-40ed8871dd16"). InnerVolumeSpecName "kube-api-access-gv87p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.839555 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv87p\" (UniqueName: \"kubernetes.io/projected/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-kube-api-access-gv87p\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.880720 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c88e9bd-ca23-4af3-b79f-40ed8871dd16" (UID: "6c88e9bd-ca23-4af3-b79f-40ed8871dd16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.912329 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c88e9bd-ca23-4af3-b79f-40ed8871dd16" (UID: "6c88e9bd-ca23-4af3-b79f-40ed8871dd16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.912844 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c88e9bd-ca23-4af3-b79f-40ed8871dd16" (UID: "6c88e9bd-ca23-4af3-b79f-40ed8871dd16"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.943882 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c88e9bd-ca23-4af3-b79f-40ed8871dd16" (UID: "6c88e9bd-ca23-4af3-b79f-40ed8871dd16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.944315 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-config" (OuterVolumeSpecName: "config") pod "6c88e9bd-ca23-4af3-b79f-40ed8871dd16" (UID: "6c88e9bd-ca23-4af3-b79f-40ed8871dd16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.945593 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.945620 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.945637 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.945648 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:01 crc kubenswrapper[4782]: I1124 12:15:01.945660 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c88e9bd-ca23-4af3-b79f-40ed8871dd16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.200139 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.410569 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.410581 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.459330 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.459353 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" event={"ID":"6c88e9bd-ca23-4af3-b79f-40ed8871dd16","Type":"ContainerDied","Data":"1d2b87db504dbe08fbe70d21b30daa1d14cf94bc33e3c4529232d48749500f1c"} Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.459789 4782 scope.go:117] "RemoveContainer" containerID="51c4fcde52b0aec66588b5115a9d37ef1c91db7055616a8e59f417080e43ebb6" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.467267 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" event={"ID":"60f6fefd-3f98-44db-b4cf-05debe821489","Type":"ContainerStarted","Data":"92454e80fcb8515f07b349d7e0d6d1182992bc2066f89e1b4bf81f9a23c4ef86"} Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.467318 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" event={"ID":"60f6fefd-3f98-44db-b4cf-05debe821489","Type":"ContainerStarted","Data":"1f84f6a2dca7f880ece4f370ce3960002c2ff8adc28203894a8eb0916ff02a53"} Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.471728 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7","Type":"ContainerStarted","Data":"528e0ec0dabcd0eabc7c906d3afd56608544747b4c9db6d44da4b3472023759f"} Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.489740 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" podStartSLOduration=2.489720707 podStartE2EDuration="2.489720707s" podCreationTimestamp="2025-11-24 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:02.487775424 +0000 UTC m=+1151.731609203" watchObservedRunningTime="2025-11-24 12:15:02.489720707 +0000 UTC m=+1151.733554476" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.497566 4782 scope.go:117] "RemoveContainer" containerID="b545d1381f61ad9dec6b976417a404d1a5c2c0be51f251e14d1b1ab36611888b" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.528502 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vwmjc"] Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.540707 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vwmjc"] Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.668227 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" probeResult="failure" 
output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:40886->10.217.0.159:9311: read: connection reset by peer" Nov 24 12:15:02 crc kubenswrapper[4782]: I1124 12:15:02.668584 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-754ccd5b54-bt86q" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:40870->10.217.0.159:9311: read: connection reset by peer" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.418975 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-754ccd5b54-bt86q" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.478864 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data-custom\") pod \"94a02614-e0c0-4091-bf8e-5f660831e8cd\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.478945 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data\") pod \"94a02614-e0c0-4091-bf8e-5f660831e8cd\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.479119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a02614-e0c0-4091-bf8e-5f660831e8cd-logs\") pod \"94a02614-e0c0-4091-bf8e-5f660831e8cd\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.479150 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-combined-ca-bundle\") pod \"94a02614-e0c0-4091-bf8e-5f660831e8cd\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.479227 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrzh6\" (UniqueName: \"kubernetes.io/projected/94a02614-e0c0-4091-bf8e-5f660831e8cd-kube-api-access-xrzh6\") pod \"94a02614-e0c0-4091-bf8e-5f660831e8cd\" (UID: \"94a02614-e0c0-4091-bf8e-5f660831e8cd\") " Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.479749 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a02614-e0c0-4091-bf8e-5f660831e8cd-logs" (OuterVolumeSpecName: "logs") pod "94a02614-e0c0-4091-bf8e-5f660831e8cd" (UID: "94a02614-e0c0-4091-bf8e-5f660831e8cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.496755 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94a02614-e0c0-4091-bf8e-5f660831e8cd" (UID: "94a02614-e0c0-4091-bf8e-5f660831e8cd"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.517353 4782 generic.go:334] "Generic (PLEG): container finished" podID="60f6fefd-3f98-44db-b4cf-05debe821489" containerID="92454e80fcb8515f07b349d7e0d6d1182992bc2066f89e1b4bf81f9a23c4ef86" exitCode=0 Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.520660 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a02614-e0c0-4091-bf8e-5f660831e8cd-kube-api-access-xrzh6" (OuterVolumeSpecName: "kube-api-access-xrzh6") pod "94a02614-e0c0-4091-bf8e-5f660831e8cd" (UID: "94a02614-e0c0-4091-bf8e-5f660831e8cd"). InnerVolumeSpecName "kube-api-access-xrzh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.523274 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" path="/var/lib/kubelet/pods/6c88e9bd-ca23-4af3-b79f-40ed8871dd16/volumes" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.527010 4782 generic.go:334] "Generic (PLEG): container finished" podID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerID="2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853" exitCode=0 Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.527279 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-754ccd5b54-bt86q" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.550841 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94a02614-e0c0-4091-bf8e-5f660831e8cd" (UID: "94a02614-e0c0-4091-bf8e-5f660831e8cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.568819 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerID="736bc5742537ccad3e282505c7af797b96cd5b3f170b4c1d3d50bce5ee96cbee" exitCode=0 Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.591136 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrzh6\" (UniqueName: \"kubernetes.io/projected/94a02614-e0c0-4091-bf8e-5f660831e8cd-kube-api-access-xrzh6\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.591200 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.591213 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a02614-e0c0-4091-bf8e-5f660831e8cd-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.591222 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.638205 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" event={"ID":"60f6fefd-3f98-44db-b4cf-05debe821489","Type":"ContainerDied","Data":"92454e80fcb8515f07b349d7e0d6d1182992bc2066f89e1b4bf81f9a23c4ef86"} Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.638441 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754ccd5b54-bt86q" event={"ID":"94a02614-e0c0-4091-bf8e-5f660831e8cd","Type":"ContainerDied","Data":"2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853"} Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.638537 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754ccd5b54-bt86q" event={"ID":"94a02614-e0c0-4091-bf8e-5f660831e8cd","Type":"ContainerDied","Data":"5d55a65266ded68a44e79f73030f56dbf66cf409fbde5e42b4817baac2077243"} Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.638605 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d8027c1-e92f-4e6c-b07d-49c24cee85c7","Type":"ContainerDied","Data":"736bc5742537ccad3e282505c7af797b96cd5b3f170b4c1d3d50bce5ee96cbee"} Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.638708 4782 scope.go:117] "RemoveContainer" containerID="2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.716455 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data" (OuterVolumeSpecName: "config-data") pod "94a02614-e0c0-4091-bf8e-5f660831e8cd" (UID: "94a02614-e0c0-4091-bf8e-5f660831e8cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.730479 4782 scope.go:117] "RemoveContainer" containerID="37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.779129 4782 scope.go:117] "RemoveContainer" containerID="2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853" Nov 24 12:15:03 crc kubenswrapper[4782]: E1124 12:15:03.779911 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853\": container with ID starting with 2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853 not found: ID does not exist" containerID="2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.779951 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853"} err="failed to get container status \"2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853\": rpc error: code = NotFound desc = could not find container \"2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853\": container with ID starting with 2baf180d7f5736e84cc607735d3bccaea9eab02470a164de77ff3fc8cf903853 not found: ID does not exist" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.780012 4782 scope.go:117] "RemoveContainer" containerID="37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82" Nov 24 12:15:03 crc kubenswrapper[4782]: E1124 12:15:03.780472 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82\": container with ID starting with 37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82 not found: ID does not exist" containerID="37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.780503 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82"} err="failed to get container status \"37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82\": rpc error: code = NotFound desc = could not find container \"37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82\": container with ID starting with 37b07b722974890a18f9a9fffdb5a88917e30726da722d7b6c0b49c105677d82 not found: ID does not exist" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.801330 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a02614-e0c0-4091-bf8e-5f660831e8cd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.873794 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-754ccd5b54-bt86q"] Nov 24 12:15:03 crc kubenswrapper[4782]: I1124 12:15:03.881246 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-754ccd5b54-bt86q"] Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.593470 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerID="3a86ce44d4c1cd0f854a412fdbf1a5c5819c094b5b7a97047a87185167dd25de" exitCode=0 Nov 
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.793907 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.830784 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-scripts\") pod \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") "
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.830885 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-etc-machine-id\") pod \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") "
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.831001 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1d8027c1-e92f-4e6c-b07d-49c24cee85c7" (UID: "1d8027c1-e92f-4e6c-b07d-49c24cee85c7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.831023 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data\") pod \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") "
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.831061 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d6gp\" (UniqueName: \"kubernetes.io/projected/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-kube-api-access-5d6gp\") pod \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") "
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.831080 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-combined-ca-bundle\") pod \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") "
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.831126 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data-custom\") pod \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\" (UID: \"1d8027c1-e92f-4e6c-b07d-49c24cee85c7\") "
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.831455 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.839784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-scripts" (OuterVolumeSpecName: "scripts") pod "1d8027c1-e92f-4e6c-b07d-49c24cee85c7" (UID: "1d8027c1-e92f-4e6c-b07d-49c24cee85c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
"1d8027c1-e92f-4e6c-b07d-49c24cee85c7" (UID: "1d8027c1-e92f-4e6c-b07d-49c24cee85c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.852671 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d8027c1-e92f-4e6c-b07d-49c24cee85c7" (UID: "1d8027c1-e92f-4e6c-b07d-49c24cee85c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.852806 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-kube-api-access-5d6gp" (OuterVolumeSpecName: "kube-api-access-5d6gp") pod "1d8027c1-e92f-4e6c-b07d-49c24cee85c7" (UID: "1d8027c1-e92f-4e6c-b07d-49c24cee85c7"). InnerVolumeSpecName "kube-api-access-5d6gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.947698 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d8027c1-e92f-4e6c-b07d-49c24cee85c7" (UID: "1d8027c1-e92f-4e6c-b07d-49c24cee85c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.953928 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d6gp\" (UniqueName: \"kubernetes.io/projected/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-kube-api-access-5d6gp\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.953959 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.953972 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[4782]: I1124 12:15:04.981927 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.059510 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f6fefd-3f98-44db-b4cf-05debe821489-secret-volume\") pod \"60f6fefd-3f98-44db-b4cf-05debe821489\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.059582 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f6fefd-3f98-44db-b4cf-05debe821489-config-volume\") pod \"60f6fefd-3f98-44db-b4cf-05debe821489\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.059741 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldm7\" (UniqueName: \"kubernetes.io/projected/60f6fefd-3f98-44db-b4cf-05debe821489-kube-api-access-7ldm7\") pod \"60f6fefd-3f98-44db-b4cf-05debe821489\" (UID: \"60f6fefd-3f98-44db-b4cf-05debe821489\") " Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.064063 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f6fefd-3f98-44db-b4cf-05debe821489-config-volume" (OuterVolumeSpecName: "config-volume") pod "60f6fefd-3f98-44db-b4cf-05debe821489" (UID: "60f6fefd-3f98-44db-b4cf-05debe821489"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.071262 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.071296 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f6fefd-3f98-44db-b4cf-05debe821489-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.080572 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f6fefd-3f98-44db-b4cf-05debe821489-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60f6fefd-3f98-44db-b4cf-05debe821489" (UID: "60f6fefd-3f98-44db-b4cf-05debe821489"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.094711 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f6fefd-3f98-44db-b4cf-05debe821489-kube-api-access-7ldm7" (OuterVolumeSpecName: "kube-api-access-7ldm7") pod "60f6fefd-3f98-44db-b4cf-05debe821489" (UID: "60f6fefd-3f98-44db-b4cf-05debe821489"). InnerVolumeSpecName "kube-api-access-7ldm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.100784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data" (OuterVolumeSpecName: "config-data") pod "1d8027c1-e92f-4e6c-b07d-49c24cee85c7" (UID: "1d8027c1-e92f-4e6c-b07d-49c24cee85c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.172880 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldm7\" (UniqueName: \"kubernetes.io/projected/60f6fefd-3f98-44db-b4cf-05debe821489-kube-api-access-7ldm7\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.172988 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8027c1-e92f-4e6c-b07d-49c24cee85c7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.173000 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f6fefd-3f98-44db-b4cf-05debe821489-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.505284 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" path="/var/lib/kubelet/pods/94a02614-e0c0-4091-bf8e-5f660831e8cd/volumes" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.613891 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d8027c1-e92f-4e6c-b07d-49c24cee85c7","Type":"ContainerDied","Data":"14407a6fe4c295a7325529f3c7ad434425f92b3e9103be750bf5c7cf0857267d"} Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.613946 4782 scope.go:117] "RemoveContainer" containerID="736bc5742537ccad3e282505c7af797b96cd5b3f170b4c1d3d50bce5ee96cbee" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.614096 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.623928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" event={"ID":"60f6fefd-3f98-44db-b4cf-05debe821489","Type":"ContainerDied","Data":"1f84f6a2dca7f880ece4f370ce3960002c2ff8adc28203894a8eb0916ff02a53"} Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.623999 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f84f6a2dca7f880ece4f370ce3960002c2ff8adc28203894a8eb0916ff02a53" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.624091 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.651280 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.680322 4782 scope.go:117] "RemoveContainer" containerID="3a86ce44d4c1cd0f854a412fdbf1a5c5819c094b5b7a97047a87185167dd25de" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.687436 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.702934 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:15:05 crc kubenswrapper[4782]: E1124 12:15:05.703285 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="cinder-scheduler" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.703302 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="cinder-scheduler" Nov 24 12:15:05 crc kubenswrapper[4782]: E1124 12:15:05.703315 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.703322 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" Nov 24 12:15:05 crc kubenswrapper[4782]: E1124 12:15:05.703332 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f6fefd-3f98-44db-b4cf-05debe821489" containerName="collect-profiles" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.703338 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f6fefd-3f98-44db-b4cf-05debe821489" containerName="collect-profiles" Nov 24 12:15:05 crc kubenswrapper[4782]: E1124 12:15:05.703348 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerName="dnsmasq-dns" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.703354 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerName="dnsmasq-dns" Nov 24 12:15:05 crc kubenswrapper[4782]: E1124 12:15:05.703365 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerName="init" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.703383 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerName="init" Nov 24 12:15:05 crc kubenswrapper[4782]: E1124 12:15:05.703396 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.707639 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" Nov 24 12:15:05 crc kubenswrapper[4782]: E1124 12:15:05.707710 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="probe" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.707721 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="probe" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.708064 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="60f6fefd-3f98-44db-b4cf-05debe821489" containerName="collect-profiles" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.708086 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerName="dnsmasq-dns" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.708096 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="cinder-scheduler" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.708111 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.708125 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" containerName="probe" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.708137 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a02614-e0c0-4091-bf8e-5f660831e8cd" containerName="barbican-api-log" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.709633 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.714859 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.735131 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.803792 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mj6\" (UniqueName: \"kubernetes.io/projected/611df7d1-ff5a-4747-b3ed-be19deedd3c6-kube-api-access-k8mj6\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.803878 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.803943 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.803985 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/611df7d1-ff5a-4747-b3ed-be19deedd3c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.804048 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") 
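The reconciler lines above show the volume set the replacement cinder-scheduler-0 pod declares: secret-backed volumes, a host-path volume, and an auto-injected projected service-account token. A hedged reconstruction of roughly what that stanza looks like in corev1 terms follows; the plugin kinds come from the log, while the secret names and the hostPath source path are assumptions for illustration.

```go
// volumes_sketch.go - hedged guess at cinder-scheduler-0's volume stanza,
// inferred from the kubernetes.io/secret, host-path and projected plugin
// names in the reconciler log lines above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{Name: "scripts", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-scripts"}}}, // assumed secret name
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-scheduler-config-data"}}}, // secret seen in the reflector cache line
		{Name: "config-data-custom", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-scheduler-config-data-custom"}}}, // assumed
		{Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}}}, // assumed
		{Name: "etc-machine-id", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/machine-id"}}}, // assumed host path
		// kube-api-access-k8mj6 is the projected service-account token volume
		// that Kubernetes injects automatically; it is not declared in the spec.
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```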
" pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.804132 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.905682 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8mj6\" (UniqueName: \"kubernetes.io/projected/611df7d1-ff5a-4747-b3ed-be19deedd3c6-kube-api-access-k8mj6\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.906015 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.906068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.906087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/611df7d1-ff5a-4747-b3ed-be19deedd3c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.906140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.906181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.907096 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/611df7d1-ff5a-4747-b3ed-be19deedd3c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.922160 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.922757 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.924224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.925341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611df7d1-ff5a-4747-b3ed-be19deedd3c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:05 crc kubenswrapper[4782]: I1124 12:15:05.927159 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mj6\" (UniqueName: \"kubernetes.io/projected/611df7d1-ff5a-4747-b3ed-be19deedd3c6-kube-api-access-k8mj6\") pod \"cinder-scheduler-0\" (UID: \"611df7d1-ff5a-4747-b3ed-be19deedd3c6\") " pod="openstack/cinder-scheduler-0" Nov 24 12:15:06 crc kubenswrapper[4782]: I1124 12:15:06.061315 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:15:06 crc kubenswrapper[4782]: I1124 12:15:06.118056 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:15:06 crc kubenswrapper[4782]: I1124 12:15:06.535497 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-vwmjc" podUID="6c88e9bd-ca23-4af3-b79f-40ed8871dd16" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: i/o timeout" Nov 24 12:15:06 crc kubenswrapper[4782]: I1124 12:15:06.690027 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:15:07 crc kubenswrapper[4782]: I1124 12:15:07.502137 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8027c1-e92f-4e6c-b07d-49c24cee85c7" path="/var/lib/kubelet/pods/1d8027c1-e92f-4e6c-b07d-49c24cee85c7/volumes" Nov 24 12:15:07 crc kubenswrapper[4782]: I1124 12:15:07.650566 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"611df7d1-ff5a-4747-b3ed-be19deedd3c6","Type":"ContainerStarted","Data":"8dfd1415899bbc81f844c81c61279b58e408ba99bde827265a9f8e3e10376029"} Nov 24 12:15:07 crc kubenswrapper[4782]: I1124 12:15:07.650867 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"611df7d1-ff5a-4747-b3ed-be19deedd3c6","Type":"ContainerStarted","Data":"f93d1fe5476f3cc3cdc2a68c7026033e3b7d15a516002af688b7b12026b8bf79"} Nov 24 12:15:07 crc kubenswrapper[4782]: I1124 12:15:07.660620 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:15:07 crc kubenswrapper[4782]: I1124 12:15:07.661090 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:15:07 crc kubenswrapper[4782]: 
Nov 24 12:15:07 crc kubenswrapper[4782]: I1124 12:15:07.766479 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6574f9bb76-jkv6h"
Nov 24 12:15:08 crc kubenswrapper[4782]: I1124 12:15:08.695959 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"611df7d1-ff5a-4747-b3ed-be19deedd3c6","Type":"ContainerStarted","Data":"9b7dc7c3fed71b2b35c41310a34c0d3584747476eb61dfef2762aebb02ebf0c3"}
Nov 24 12:15:08 crc kubenswrapper[4782]: I1124 12:15:08.725853 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.725831918 podStartE2EDuration="3.725831918s" podCreationTimestamp="2025-11-24 12:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:08.718648031 +0000 UTC m=+1157.962481800" watchObservedRunningTime="2025-11-24 12:15:08.725831918 +0000 UTC m=+1157.969665687"
Nov 24 12:15:08 crc kubenswrapper[4782]: I1124 12:15:08.870813 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.662128 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-556bd89d59-52m2m"]
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.663851 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.671606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-556bd89d59-52m2m"]
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.672113 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.672319 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.672515 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.811829 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-internal-tls-certs\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.811902 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-config-data\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.811926 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-public-tls-certs\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.812001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-log-httpd\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.812026 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-etc-swift\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.812063 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-run-httpd\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.812089 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-combined-ca-bundle\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.812112 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2qpz\" (UniqueName: \"kubernetes.io/projected/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-kube-api-access-w2qpz\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.913899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-run-httpd\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.913949 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-combined-ca-bundle\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.913973 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2qpz\" (UniqueName: \"kubernetes.io/projected/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-kube-api-access-w2qpz\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.914025 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-internal-tls-certs\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.914059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-config-data\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.914871 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-run-httpd\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.914080 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-public-tls-certs\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.915048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-log-httpd\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.915083 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-etc-swift\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.915809 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-log-httpd\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.921720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-public-tls-certs\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.922084 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-etc-swift\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.935057 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-config-data\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.936152 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-internal-tls-certs\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.936506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-combined-ca-bundle\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.949054 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2qpz\" (UniqueName: \"kubernetes.io/projected/a2fa4f6f-fc43-4b5c-af94-0534b54364d7-kube-api-access-w2qpz\") pod \"swift-proxy-556bd89d59-52m2m\" (UID: \"a2fa4f6f-fc43-4b5c-af94-0534b54364d7\") " pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:09 crc kubenswrapper[4782]: I1124 12:15:09.982358 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:10 crc kubenswrapper[4782]: I1124 12:15:10.857991 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-556bd89d59-52m2m"]
Nov 24 12:15:10 crc kubenswrapper[4782]: W1124 12:15:10.868334 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2fa4f6f_fc43_4b5c_af94_0534b54364d7.slice/crio-9f4e2aff60ad551b5f3549856a0a8f35c3137b0b5f03200d67d0deb72b171e68 WatchSource:0}: Error finding container 9f4e2aff60ad551b5f3549856a0a8f35c3137b0b5f03200d67d0deb72b171e68: Status 404 returned error can't find the container with id 9f4e2aff60ad551b5f3549856a0a8f35c3137b0b5f03200d67d0deb72b171e68
Nov 24 12:15:11 crc kubenswrapper[4782]: I1124 12:15:11.061472 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 24 12:15:11 crc kubenswrapper[4782]: I1124 12:15:11.784541 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-556bd89d59-52m2m" event={"ID":"a2fa4f6f-fc43-4b5c-af94-0534b54364d7","Type":"ContainerStarted","Data":"91fbedd9a7c3bd6a3c1eea1ab7d8bd5a280e800665728872c2f2b97c3cd8c293"}
Nov 24 12:15:11 crc kubenswrapper[4782]: I1124 12:15:11.784818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-556bd89d59-52m2m" event={"ID":"a2fa4f6f-fc43-4b5c-af94-0534b54364d7","Type":"ContainerStarted","Data":"9f4e2aff60ad551b5f3549856a0a8f35c3137b0b5f03200d67d0deb72b171e68"}
Nov 24 12:15:12 crc kubenswrapper[4782]: I1124 12:15:12.591439 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:15:12 crc kubenswrapper[4782]: I1124 12:15:12.591771 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-central-agent" containerID="cri-o://8265cec06497a9db9812e78f634b8ed1257ef9ed4e28b2318e56dd87ac6ef87d" gracePeriod=30
Nov 24 12:15:12 crc kubenswrapper[4782]: I1124 12:15:12.592238 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="proxy-httpd" containerID="cri-o://ed407cd4ecbed8c12cb5475a7baad02beee17963944921dce1c10e088f1e294f" gracePeriod=30
Nov 24 12:15:12 crc kubenswrapper[4782]: I1124 12:15:12.592291 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="sg-core" containerID="cri-o://059f9b11b21838144136d7d458556169521d3cbec5765b4f6cb35ea25b1b9dc1" gracePeriod=30
Nov 24 12:15:12 crc kubenswrapper[4782]: I1124 12:15:12.592353 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-notification-agent" containerID="cri-o://eac58ec35a684dc2a79e654a7ab4e803740c61b7e7006d7588160888952b0344" gracePeriod=30
Nov 24 12:15:12 crc kubenswrapper[4782]: I1124 12:15:12.805052 4782 generic.go:334] "Generic (PLEG): container finished" podID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerID="059f9b11b21838144136d7d458556169521d3cbec5765b4f6cb35ea25b1b9dc1" exitCode=2
Nov 24 12:15:12 crc kubenswrapper[4782]: I1124 12:15:12.805471 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerDied","Data":"059f9b11b21838144136d7d458556169521d3cbec5765b4f6cb35ea25b1b9dc1"}
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.623286 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.626409 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-log" containerID="cri-o://39d9858c06caf43bf4b860c62d64ae98ad1b60241013e4b16e462968e64000dc" gracePeriod=30
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.626509 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-httpd" containerID="cri-o://5a16d7ae2626ec216e00a63076bd7ac1bd2e8be464407a67767d2413251d272b" gracePeriod=30
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.829148 4782 generic.go:334] "Generic (PLEG): container finished" podID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerID="39d9858c06caf43bf4b860c62d64ae98ad1b60241013e4b16e462968e64000dc" exitCode=143
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.829228 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51c9b08b-6a0a-45a6-904c-9964952a7b23","Type":"ContainerDied","Data":"39d9858c06caf43bf4b860c62d64ae98ad1b60241013e4b16e462968e64000dc"}
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.833125 4782 generic.go:334] "Generic (PLEG): container finished" podID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerID="ed407cd4ecbed8c12cb5475a7baad02beee17963944921dce1c10e088f1e294f" exitCode=0
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.833156 4782 generic.go:334] "Generic (PLEG): container finished" podID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerID="8265cec06497a9db9812e78f634b8ed1257ef9ed4e28b2318e56dd87ac6ef87d" exitCode=0
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.833175 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerDied","Data":"ed407cd4ecbed8c12cb5475a7baad02beee17963944921dce1c10e088f1e294f"}
Nov 24 12:15:13 crc kubenswrapper[4782]: I1124 12:15:13.833200 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerDied","Data":"8265cec06497a9db9812e78f634b8ed1257ef9ed4e28b2318e56dd87ac6ef87d"}
Nov 24 12:15:14 crc kubenswrapper[4782]: I1124 12:15:14.716709 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 12:15:14 crc kubenswrapper[4782]: I1124 12:15:14.718149 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-httpd" containerID="cri-o://6951aeeec7edb151fecd0156a9a76701e6509e4d3fe354c7be7d68f9407eb02a" gracePeriod=30
Nov 24 12:15:14 crc kubenswrapper[4782]: I1124 12:15:14.718307 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-log" containerID="cri-o://3d852f0ace2cb409980731475d75e86bd6e31fbde626b0acec9e1502d54ed6f4" gracePeriod=30
Nov 24 12:15:14 crc kubenswrapper[4782]: I1124 12:15:14.849113 4782 generic.go:334] "Generic (PLEG): container finished" podID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerID="3d852f0ace2cb409980731475d75e86bd6e31fbde626b0acec9e1502d54ed6f4" exitCode=143
Nov 24 12:15:14 crc kubenswrapper[4782]: I1124 12:15:14.849151 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97916bf7-05b5-442a-b908-3f0e20f4badb","Type":"ContainerDied","Data":"3d852f0ace2cb409980731475d75e86bd6e31fbde626b0acec9e1502d54ed6f4"}
Nov 24 12:15:15 crc kubenswrapper[4782]: I1124 12:15:15.881736 4782 generic.go:334] "Generic (PLEG): container finished" podID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerID="1895afb98fec8c2d968a8e68fb5488f753695b2d0f201a00927dd7333d85081e" exitCode=137
Nov 24 12:15:15 crc kubenswrapper[4782]: I1124 12:15:15.881981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0","Type":"ContainerDied","Data":"1895afb98fec8c2d968a8e68fb5488f753695b2d0f201a00927dd7333d85081e"}
Nov 24 12:15:15 crc kubenswrapper[4782]: I1124 12:15:15.990724 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": dial tcp 10.217.0.163:8776: connect: connection refused"
Nov 24 12:15:16 crc kubenswrapper[4782]: I1124 12:15:16.284853 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 24 12:15:17 crc kubenswrapper[4782]: I1124 12:15:17.662719 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Nov 24 12:15:17 crc kubenswrapper[4782]: I1124 12:15:17.767965 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 24 12:15:17 crc kubenswrapper[4782]: I1124 12:15:17.909709 4782 generic.go:334] "Generic (PLEG): container finished" podID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerID="5a16d7ae2626ec216e00a63076bd7ac1bd2e8be464407a67767d2413251d272b" exitCode=0 Nov 24 12:15:17 crc kubenswrapper[4782]: I1124 12:15:17.909810 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51c9b08b-6a0a-45a6-904c-9964952a7b23","Type":"ContainerDied","Data":"5a16d7ae2626ec216e00a63076bd7ac1bd2e8be464407a67767d2413251d272b"} Nov 24 12:15:17 crc kubenswrapper[4782]: I1124 12:15:17.916592 4782 generic.go:334] "Generic (PLEG): container finished" podID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerID="eac58ec35a684dc2a79e654a7ab4e803740c61b7e7006d7588160888952b0344" exitCode=0 Nov 24 12:15:17 crc kubenswrapper[4782]: I1124 12:15:17.916637 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerDied","Data":"eac58ec35a684dc2a79e654a7ab4e803740c61b7e7006d7588160888952b0344"} Nov 24 12:15:18 crc kubenswrapper[4782]: I1124 12:15:18.932958 4782 generic.go:334] "Generic (PLEG): container finished" podID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerID="6951aeeec7edb151fecd0156a9a76701e6509e4d3fe354c7be7d68f9407eb02a" exitCode=0 Nov 24 12:15:18 crc kubenswrapper[4782]: I1124 12:15:18.933182 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97916bf7-05b5-442a-b908-3f0e20f4badb","Type":"ContainerDied","Data":"6951aeeec7edb151fecd0156a9a76701e6509e4d3fe354c7be7d68f9407eb02a"} Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.477276 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qs6zz"] Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.478773 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.546959 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpn9\" (UniqueName: \"kubernetes.io/projected/82e0a450-529e-4fed-a95f-c7a37b086f3b-kube-api-access-kfpn9\") pod \"nova-api-db-create-qs6zz\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") " pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.547097 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e0a450-529e-4fed-a95f-c7a37b086f3b-operator-scripts\") pod \"nova-api-db-create-qs6zz\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") " pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.578912 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qs6zz"] Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.649557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e0a450-529e-4fed-a95f-c7a37b086f3b-operator-scripts\") pod \"nova-api-db-create-qs6zz\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") " pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.649688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpn9\" (UniqueName: \"kubernetes.io/projected/82e0a450-529e-4fed-a95f-c7a37b086f3b-kube-api-access-kfpn9\") pod \"nova-api-db-create-qs6zz\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") " pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.650597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e0a450-529e-4fed-a95f-c7a37b086f3b-operator-scripts\") pod \"nova-api-db-create-qs6zz\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") " pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.662636 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mxg8x"] Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.663794 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.686955 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mxg8x"] Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.698532 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpn9\" (UniqueName: \"kubernetes.io/projected/82e0a450-529e-4fed-a95f-c7a37b086f3b-kube-api-access-kfpn9\") pod \"nova-api-db-create-qs6zz\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") " pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.701706 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d520-account-create-brb7r"] Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.704055 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.724296 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d520-account-create-brb7r"] Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.724945 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.756342 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-operator-scripts\") pod \"nova-cell0-db-create-mxg8x\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") " pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.756438 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m2fq\" (UniqueName: \"kubernetes.io/projected/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-kube-api-access-8m2fq\") pod \"nova-cell0-db-create-mxg8x\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") " pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.756474 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55557c05-cdcc-4fff-8ca2-6e616c2a1854-operator-scripts\") pod \"nova-api-d520-account-create-brb7r\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.756504 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrsrj\" (UniqueName: \"kubernetes.io/projected/55557c05-cdcc-4fff-8ca2-6e616c2a1854-kube-api-access-wrsrj\") pod \"nova-api-d520-account-create-brb7r\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.860412 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-operator-scripts\") pod \"nova-cell0-db-create-mxg8x\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") " pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.860578 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m2fq\" (UniqueName: \"kubernetes.io/projected/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-kube-api-access-8m2fq\") pod \"nova-cell0-db-create-mxg8x\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") " pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.860609 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55557c05-cdcc-4fff-8ca2-6e616c2a1854-operator-scripts\") pod \"nova-api-d520-account-create-brb7r\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.860641 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrsrj\" (UniqueName: 
\"kubernetes.io/projected/55557c05-cdcc-4fff-8ca2-6e616c2a1854-kube-api-access-wrsrj\") pod \"nova-api-d520-account-create-brb7r\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.861505 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-operator-scripts\") pod \"nova-cell0-db-create-mxg8x\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") " pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.862176 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55557c05-cdcc-4fff-8ca2-6e616c2a1854-operator-scripts\") pod \"nova-api-d520-account-create-brb7r\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.893849 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrsrj\" (UniqueName: \"kubernetes.io/projected/55557c05-cdcc-4fff-8ca2-6e616c2a1854-kube-api-access-wrsrj\") pod \"nova-api-d520-account-create-brb7r\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.903487 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qs6zz" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.906884 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m2fq\" (UniqueName: \"kubernetes.io/projected/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-kube-api-access-8m2fq\") pod \"nova-cell0-db-create-mxg8x\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") " pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.925046 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.947187 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gghr8"] Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.948499 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.965702 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f452b668-2d6c-4b6c-b0d7-053a7b908a24-operator-scripts\") pod \"nova-cell1-db-create-gghr8\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") " pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.965790 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgnc7\" (UniqueName: \"kubernetes.io/projected/f452b668-2d6c-4b6c-b0d7-053a7b908a24-kube-api-access-mgnc7\") pod \"nova-cell1-db-create-gghr8\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") " pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:19 crc kubenswrapper[4782]: I1124 12:15:19.993464 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gghr8"] Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.006194 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d20a-account-create-cdvbc"] Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.007767 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.011784 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.028500 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d20a-account-create-cdvbc"] Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.038043 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8a90-account-create-b5fwb"] Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.039799 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.043145 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.061979 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mxg8x" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.062478 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8a90-account-create-b5fwb"] Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.070596 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f452b668-2d6c-4b6c-b0d7-053a7b908a24-operator-scripts\") pod \"nova-cell1-db-create-gghr8\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") " pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.070667 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgnc7\" (UniqueName: \"kubernetes.io/projected/f452b668-2d6c-4b6c-b0d7-053a7b908a24-kube-api-access-mgnc7\") pod \"nova-cell1-db-create-gghr8\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") " pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.071520 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f452b668-2d6c-4b6c-b0d7-053a7b908a24-operator-scripts\") pod \"nova-cell1-db-create-gghr8\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") " pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.098108 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgnc7\" (UniqueName: \"kubernetes.io/projected/f452b668-2d6c-4b6c-b0d7-053a7b908a24-kube-api-access-mgnc7\") pod \"nova-cell1-db-create-gghr8\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") " pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.130433 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.172564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-operator-scripts\") pod \"nova-cell1-8a90-account-create-b5fwb\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") " pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.172636 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skn8\" (UniqueName: \"kubernetes.io/projected/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-kube-api-access-4skn8\") pod \"nova-cell1-8a90-account-create-b5fwb\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") " pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.172709 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkhvw\" (UniqueName: \"kubernetes.io/projected/ff281fed-4697-4e59-9d08-260214164d8e-kube-api-access-jkhvw\") pod \"nova-cell0-d20a-account-create-cdvbc\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") " pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.172744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff281fed-4697-4e59-9d08-260214164d8e-operator-scripts\") pod \"nova-cell0-d20a-account-create-cdvbc\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") " pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.177955 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67ff6979bd-w7crx" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.237640 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.273689 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-scripts\") pod \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.273804 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr8nd\" (UniqueName: \"kubernetes.io/projected/be96657f-b39b-4f41-8e3f-b364cf03d7d3-kube-api-access-nr8nd\") pod \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.273861 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-sg-core-conf-yaml\") pod \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.273951 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-config-data\") pod \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.274014 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-log-httpd\") pod \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.274035 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-run-httpd\") pod \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.274056 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-combined-ca-bundle\") pod \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\" (UID: \"be96657f-b39b-4f41-8e3f-b364cf03d7d3\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.274245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff281fed-4697-4e59-9d08-260214164d8e-operator-scripts\") pod \"nova-cell0-d20a-account-create-cdvbc\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") " pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.274357 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-operator-scripts\") pod \"nova-cell1-8a90-account-create-b5fwb\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") " pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.274418 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4skn8\" (UniqueName: 
\"kubernetes.io/projected/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-kube-api-access-4skn8\") pod \"nova-cell1-8a90-account-create-b5fwb\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") " pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.274481 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkhvw\" (UniqueName: \"kubernetes.io/projected/ff281fed-4697-4e59-9d08-260214164d8e-kube-api-access-jkhvw\") pod \"nova-cell0-d20a-account-create-cdvbc\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") " pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.297082 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be96657f-b39b-4f41-8e3f-b364cf03d7d3" (UID: "be96657f-b39b-4f41-8e3f-b364cf03d7d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.298192 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-operator-scripts\") pod \"nova-cell1-8a90-account-create-b5fwb\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") " pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.299715 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff281fed-4697-4e59-9d08-260214164d8e-operator-scripts\") pod \"nova-cell0-d20a-account-create-cdvbc\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") " pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.301480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gghr8" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.305247 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be96657f-b39b-4f41-8e3f-b364cf03d7d3" (UID: "be96657f-b39b-4f41-8e3f-b364cf03d7d3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.333506 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be96657f-b39b-4f41-8e3f-b364cf03d7d3-kube-api-access-nr8nd" (OuterVolumeSpecName: "kube-api-access-nr8nd") pod "be96657f-b39b-4f41-8e3f-b364cf03d7d3" (UID: "be96657f-b39b-4f41-8e3f-b364cf03d7d3"). InnerVolumeSpecName "kube-api-access-nr8nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.352470 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-scripts" (OuterVolumeSpecName: "scripts") pod "be96657f-b39b-4f41-8e3f-b364cf03d7d3" (UID: "be96657f-b39b-4f41-8e3f-b364cf03d7d3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.377019 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data-custom\") pod \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.377201 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-logs\") pod \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.377256 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-etc-machine-id\") pod \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.377354 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-scripts\") pod \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.377404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data\") pod \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.377436 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b557w\" (UniqueName: \"kubernetes.io/projected/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-kube-api-access-b557w\") pod \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.377466 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-combined-ca-bundle\") pod \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\" (UID: \"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.378084 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr8nd\" (UniqueName: \"kubernetes.io/projected/be96657f-b39b-4f41-8e3f-b364cf03d7d3-kube-api-access-nr8nd\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.378104 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.378115 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be96657f-b39b-4f41-8e3f-b364cf03d7d3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.378128 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-scripts\") on node \"crc\" DevicePath 
\"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.387194 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" (UID: "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.391063 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-logs" (OuterVolumeSpecName: "logs") pod "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" (UID: "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.397083 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkhvw\" (UniqueName: \"kubernetes.io/projected/ff281fed-4697-4e59-9d08-260214164d8e-kube-api-access-jkhvw\") pod \"nova-cell0-d20a-account-create-cdvbc\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") " pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.456727 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4skn8\" (UniqueName: \"kubernetes.io/projected/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-kube-api-access-4skn8\") pod \"nova-cell1-8a90-account-create-b5fwb\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") " pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.480656 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-scripts" (OuterVolumeSpecName: "scripts") pod "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" (UID: "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.480737 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" (UID: "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.482715 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.482737 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.482745 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.482754 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.499944 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-kube-api-access-b557w" (OuterVolumeSpecName: "kube-api-access-b557w") pod "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" (UID: "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0"). InnerVolumeSpecName "kube-api-access-b557w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.592252 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b557w\" (UniqueName: \"kubernetes.io/projected/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-kube-api-access-b557w\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.616520 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be96657f-b39b-4f41-8e3f-b364cf03d7d3" (UID: "be96657f-b39b-4f41-8e3f-b364cf03d7d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.648990 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d20a-account-create-cdvbc" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.699253 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.721602 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8a90-account-create-b5fwb" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.763531 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" (UID: "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.801542 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.814657 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.877522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data" (OuterVolumeSpecName: "config-data") pod "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" (UID: "0d76cdd2-b661-4fe7-8420-5bb74d69e1c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904148 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-scripts\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904268 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-config-data\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904312 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-httpd-run\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904344 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-internal-tls-certs\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904411 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-combined-ca-bundle\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v58gc\" (UniqueName: \"kubernetes.io/projected/97916bf7-05b5-442a-b908-3f0e20f4badb-kube-api-access-v58gc\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904514 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-logs\") pod \"97916bf7-05b5-442a-b908-3f0e20f4badb\" (UID: \"97916bf7-05b5-442a-b908-3f0e20f4badb\") " Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.904855 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.905233 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-logs" (OuterVolumeSpecName: "logs") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.911762 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.927501 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-scripts" (OuterVolumeSpecName: "scripts") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.932582 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97916bf7-05b5-442a-b908-3f0e20f4badb-kube-api-access-v58gc" (OuterVolumeSpecName: "kube-api-access-v58gc") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "kube-api-access-v58gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.942507 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:15:20 crc kubenswrapper[4782]: I1124 12:15:20.995593 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-config-data" (OuterVolumeSpecName: "config-data") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.007665 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.007707 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.007721 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.007748 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.007761 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v58gc\" (UniqueName: \"kubernetes.io/projected/97916bf7-05b5-442a-b908-3f0e20f4badb-kube-api-access-v58gc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.007773 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97916bf7-05b5-442a-b908-3f0e20f4badb-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.081958 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.087111 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.087258 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0d76cdd2-b661-4fe7-8420-5bb74d69e1c0","Type":"ContainerDied","Data":"92a7fe39b930369cfaa7d581684401bd72117804a34bfdabea4dbd0f7a19692e"} Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.087690 4782 scope.go:117] "RemoveContainer" containerID="1895afb98fec8c2d968a8e68fb5488f753695b2d0f201a00927dd7333d85081e" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.095519 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.110265 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.112722 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-556bd89d59-52m2m" event={"ID":"a2fa4f6f-fc43-4b5c-af94-0534b54364d7","Type":"ContainerStarted","Data":"9a0cbddbb4803ca9003d1cd02846464072ddcbd68d9b18efd1956d8de6a0ec3c"} Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.113697 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-556bd89d59-52m2m" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.113718 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-556bd89d59-52m2m" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.117327 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.144338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97916bf7-05b5-442a-b908-3f0e20f4badb","Type":"ContainerDied","Data":"6904483e761cbf69b277f2c03d084ab998c35a5a6b9b05c3a50715628b2bd298"} Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.144441 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.156290 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be96657f-b39b-4f41-8e3f-b364cf03d7d3" (UID: "be96657f-b39b-4f41-8e3f-b364cf03d7d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.169560 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-config-data" (OuterVolumeSpecName: "config-data") pod "be96657f-b39b-4f41-8e3f-b364cf03d7d3" (UID: "be96657f-b39b-4f41-8e3f-b364cf03d7d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.178501 4782 scope.go:117] "RemoveContainer" containerID="bf75227df0b2637759e4087e89e420bd7ec1e177ea0255cd970a72b0293dacf0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.178532 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be96657f-b39b-4f41-8e3f-b364cf03d7d3","Type":"ContainerDied","Data":"77917b698f503312b555fa7b54fa3659682b66660aaf564144edf872240ce4aa"} Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.178634 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.185424 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-556bd89d59-52m2m" podUID="a2fa4f6f-fc43-4b5c-af94-0534b54364d7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.212706 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-scripts\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.212789 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-logs\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.212825 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.212886 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-httpd-run\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.212928 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhnt8\" (UniqueName: \"kubernetes.io/projected/51c9b08b-6a0a-45a6-904c-9964952a7b23-kube-api-access-fhnt8\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.212965 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-config-data\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.212987 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-combined-ca-bundle\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.213025 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-public-tls-certs\") pod \"51c9b08b-6a0a-45a6-904c-9964952a7b23\" (UID: \"51c9b08b-6a0a-45a6-904c-9964952a7b23\") " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.213821 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.213844 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.213854 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be96657f-b39b-4f41-8e3f-b364cf03d7d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.215834 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97916bf7-05b5-442a-b908-3f0e20f4badb" (UID: "97916bf7-05b5-442a-b908-3f0e20f4badb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.216544 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-logs" (OuterVolumeSpecName: "logs") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.219741 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.272732 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-scripts" (OuterVolumeSpecName: "scripts") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.273851 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-556bd89d59-52m2m" podStartSLOduration=12.273833117 podStartE2EDuration="12.273833117s" podCreationTimestamp="2025-11-24 12:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:21.167786223 +0000 UTC m=+1170.411620002" watchObservedRunningTime="2025-11-24 12:15:21.273833117 +0000 UTC m=+1170.517666886" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.317530 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c9b08b-6a0a-45a6-904c-9964952a7b23-kube-api-access-fhnt8" (OuterVolumeSpecName: "kube-api-access-fhnt8") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "kube-api-access-fhnt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.333985 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97916bf7-05b5-442a-b908-3f0e20f4badb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.334556 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.334672 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhnt8\" (UniqueName: \"kubernetes.io/projected/51c9b08b-6a0a-45a6-904c-9964952a7b23-kube-api-access-fhnt8\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.334821 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.334921 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9b08b-6a0a-45a6-904c-9964952a7b23-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.341190 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.348166 4782 scope.go:117] "RemoveContainer" containerID="6951aeeec7edb151fecd0156a9a76701e6509e4d3fe354c7be7d68f9407eb02a" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.398972 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.473167 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.474178 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.541597 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.572676 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.583093 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" path="/var/lib/kubelet/pods/0d76cdd2-b661-4fe7-8420-5bb74d69e1c0/volumes" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.600896 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.600933 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.613824 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.615314 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-config-data" (OuterVolumeSpecName: "config-data") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.615822 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-log" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.615984 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-log" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.616126 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="proxy-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.616228 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="proxy-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.616340 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.616726 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.616837 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="sg-core" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.616930 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="sg-core" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.617026 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api-log" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.617139 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" 
containerName="cinder-api-log" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.617474 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.617568 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.617673 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.617762 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.617870 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-log" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.617961 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-log" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.618105 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-notification-agent" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.618202 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-notification-agent" Nov 24 12:15:21 crc kubenswrapper[4782]: E1124 12:15:21.618325 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-central-agent" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.618431 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-central-agent" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.618879 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-central-agent" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632354 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="sg-core" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632463 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632578 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632664 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632743 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="ceilometer-notification-agent" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632813 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" containerName="glance-log" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632884 4782 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" containerName="glance-log" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.632951 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" containerName="proxy-httpd" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.633027 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d76cdd2-b661-4fe7-8420-5bb74d69e1c0" containerName="cinder-api-log" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.645335 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.645506 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d520-account-create-brb7r"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.645615 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qs6zz"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.645809 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.647445 4782 scope.go:117] "RemoveContainer" containerID="3d852f0ace2cb409980731475d75e86bd6e31fbde626b0acec9e1502d54ed6f4" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.652489 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.652653 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.652986 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.660450 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.689759 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.707580 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.707912 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.709404 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.712865 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.719427 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.726960 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "51c9b08b-6a0a-45a6-904c-9964952a7b23" (UID: "51c9b08b-6a0a-45a6-904c-9964952a7b23"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.749245 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.784362 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809654 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809716 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-public-tls-certs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809735 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25280233-1f0e-44f9-80ce-48d3d2413861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809765 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-config-data\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809782 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-scripts\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809831 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjsng\" (UniqueName: \"kubernetes.io/projected/533bc3bf-a4ed-4133-b448-9888eeea6416-kube-api-access-vjsng\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-config-data\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.809979 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4757\" (UniqueName: \"kubernetes.io/projected/25280233-1f0e-44f9-80ce-48d3d2413861-kube-api-access-r4757\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810082 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/533bc3bf-a4ed-4133-b448-9888eeea6416-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810150 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-config-data-custom\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810232 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-scripts\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810506 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810648 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25280233-1f0e-44f9-80ce-48d3d2413861-logs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.810857 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533bc3bf-a4ed-4133-b448-9888eeea6416-logs\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.811059 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/51c9b08b-6a0a-45a6-904c-9964952a7b23-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.829042 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.834469 4782 scope.go:117] "RemoveContainer" containerID="ed407cd4ecbed8c12cb5475a7baad02beee17963944921dce1c10e088f1e294f" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.866012 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.876588 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.879723 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.879913 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.911641 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.913439 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-public-tls-certs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.913572 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25280233-1f0e-44f9-80ce-48d3d2413861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.913688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-config-data\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.931880 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.931967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-scripts\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.931996 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjsng\" (UniqueName: \"kubernetes.io/projected/533bc3bf-a4ed-4133-b448-9888eeea6416-kube-api-access-vjsng\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932016 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-config-data\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4757\" (UniqueName: \"kubernetes.io/projected/25280233-1f0e-44f9-80ce-48d3d2413861-kube-api-access-r4757\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932106 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/533bc3bf-a4ed-4133-b448-9888eeea6416-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932130 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-config-data-custom\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932170 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-scripts\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932326 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25280233-1f0e-44f9-80ce-48d3d2413861-logs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932355 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932401 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533bc3bf-a4ed-4133-b448-9888eeea6416-logs\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " 
pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.932421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.935054 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-config-data\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.935118 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/533bc3bf-a4ed-4133-b448-9888eeea6416-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.917004 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25280233-1f0e-44f9-80ce-48d3d2413861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.947571 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.950167 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533bc3bf-a4ed-4133-b448-9888eeea6416-logs\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.951647 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25280233-1f0e-44f9-80ce-48d3d2413861-logs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.956318 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.956851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.958449 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mxg8x"] Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.965626 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.973289 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-config-data-custom\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.975486 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-scripts\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.981086 4782 scope.go:117] "RemoveContainer" containerID="059f9b11b21838144136d7d458556169521d3cbec5765b4f6cb35ea25b1b9dc1" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.982384 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4757\" (UniqueName: \"kubernetes.io/projected/25280233-1f0e-44f9-80ce-48d3d2413861-kube-api-access-r4757\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.984004 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-public-tls-certs\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.989744 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjsng\" (UniqueName: \"kubernetes.io/projected/533bc3bf-a4ed-4133-b448-9888eeea6416-kube-api-access-vjsng\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:21 crc kubenswrapper[4782]: I1124 12:15:21.989751 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.009785 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-556bd89d59-52m2m" podUID="a2fa4f6f-fc43-4b5c-af94-0534b54364d7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.022250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25280233-1f0e-44f9-80ce-48d3d2413861-config-data\") pod \"cinder-api-0\" (UID: \"25280233-1f0e-44f9-80ce-48d3d2413861\") " pod="openstack/cinder-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.037712 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.037765 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt78m\" (UniqueName: \"kubernetes.io/projected/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-kube-api-access-zt78m\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.037816 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-config-data\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.037859 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-run-httpd\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.037910 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-log-httpd\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.037949 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-scripts\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.037970 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.044160 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533bc3bf-a4ed-4133-b448-9888eeea6416-scripts\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.059549 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.071750 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gghr8"] Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.079556 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"533bc3bf-a4ed-4133-b448-9888eeea6416\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.103989 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d20a-account-create-cdvbc"] Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.143011 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.143363 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-scripts\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.143445 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.143980 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.144045 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt78m\" (UniqueName: \"kubernetes.io/projected/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-kube-api-access-zt78m\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.144139 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-config-data\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.144234 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-run-httpd\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.144332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-log-httpd\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.145054 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-log-httpd\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.148195 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.148670 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-scripts\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.149055 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-run-httpd\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.149407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.155834 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-config-data\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.178483 4782 scope.go:117] "RemoveContainer" containerID="eac58ec35a684dc2a79e654a7ab4e803740c61b7e7006d7588160888952b0344" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.204683 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8a90-account-create-b5fwb"] Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.216326 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt78m\" (UniqueName: \"kubernetes.io/projected/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-kube-api-access-zt78m\") pod \"ceilometer-0\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") " pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.235928 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.255841 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gghr8" event={"ID":"f452b668-2d6c-4b6c-b0d7-053a7b908a24","Type":"ContainerStarted","Data":"113a694b026ceba9c367f2625f217a98418aa619261f411377185372bea0c1cc"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.257725 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c7c7aa63-55ae-4525-a262-c5c9d08e4fe7","Type":"ContainerStarted","Data":"4005b70ab74795703689d2af666ce127a64786cfac87d438745c5174b2544763"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.272966 4782 scope.go:117] "RemoveContainer" containerID="8265cec06497a9db9812e78f634b8ed1257ef9ed4e28b2318e56dd87ac6ef87d" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.302712 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.655861025 podStartE2EDuration="22.302689875s" podCreationTimestamp="2025-11-24 12:15:00 +0000 UTC" firstStartedPulling="2025-11-24 12:15:02.21381818 +0000 UTC m=+1151.457651949" lastFinishedPulling="2025-11-24 12:15:19.86064703 +0000 UTC m=+1169.104480799" observedRunningTime="2025-11-24 12:15:22.296797486 +0000 UTC m=+1171.540631275" watchObservedRunningTime="2025-11-24 12:15:22.302689875 +0000 UTC m=+1171.546523644" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.303548 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51c9b08b-6a0a-45a6-904c-9964952a7b23","Type":"ContainerDied","Data":"63274c5677cd90ae657301c023a0f978a282c0e192bd3d260b0cca52a21de395"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.303741 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.383882 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qs6zz" event={"ID":"82e0a450-529e-4fed-a95f-c7a37b086f3b","Type":"ContainerStarted","Data":"b5bd8c4e6ca065b1bdfbbf3bb0a7280536730352599aec9313069eca5b97fb1c"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.384253 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qs6zz" event={"ID":"82e0a450-529e-4fed-a95f-c7a37b086f3b","Type":"ContainerStarted","Data":"09268b86ed8e550a9f686bbcb698cb9421a2c5916f046255c36ec09cdd1bc2dc"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.420195 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.444735 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.444807 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.457461 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.457593 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.460449 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.460715 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.469041 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qs6zz" podStartSLOduration=3.469019024 podStartE2EDuration="3.469019024s" podCreationTimestamp="2025-11-24 12:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:22.406658184 +0000 UTC m=+1171.650491953" watchObservedRunningTime="2025-11-24 12:15:22.469019024 +0000 UTC m=+1171.712852793" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.474799 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d520-account-create-brb7r" event={"ID":"55557c05-cdcc-4fff-8ca2-6e616c2a1854","Type":"ContainerStarted","Data":"5c9bfbcde1deaaac144fb1b174965c004833fa8d28297b6f644c32878a823842"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.474845 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d520-account-create-brb7r" event={"ID":"55557c05-cdcc-4fff-8ca2-6e616c2a1854","Type":"ContainerStarted","Data":"ff9023201eaaeed435d5a15978a601e71a36bc1c887bffa2d5c37af29ca98f11"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.509754 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxg8x" event={"ID":"f9a6efa0-83b9-4dd4-b55b-d262cb88f536","Type":"ContainerStarted","Data":"fe75ae5f86a4e088e5f591ffa74f135ccbd317ec1eee355b4b86dc2df72a1a07"} Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.533023 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-d520-account-create-brb7r" podStartSLOduration=3.5329909280000003 podStartE2EDuration="3.532990928s" podCreationTimestamp="2025-11-24 12:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:22.530268354 +0000 UTC m=+1171.774102123" watchObservedRunningTime="2025-11-24 12:15:22.532990928 +0000 UTC m=+1171.776824697" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.536256 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-556bd89d59-52m2m" podUID="a2fa4f6f-fc43-4b5c-af94-0534b54364d7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561276 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9stj\" (UniqueName: \"kubernetes.io/projected/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-kube-api-access-d9stj\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561669 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-config-data\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561692 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561781 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561813 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561917 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-logs\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.561945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-scripts\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.618680 4782 scope.go:117] "RemoveContainer" containerID="5a16d7ae2626ec216e00a63076bd7ac1bd2e8be464407a67767d2413251d272b" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664439 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-config-data\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664497 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664541 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664570 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664648 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-logs\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-scripts\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664772 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9stj\" (UniqueName: \"kubernetes.io/projected/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-kube-api-access-d9stj\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.664826 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.668667 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.670743 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-logs\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.672704 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.678008 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-scripts\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.678992 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.685181 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.687156 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-config-data\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.724099 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9stj\" (UniqueName: \"kubernetes.io/projected/c5c3127c-bed5-4d35-b535-fc6ca3f79e86-kube-api-access-d9stj\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.731987 4782 scope.go:117] "RemoveContainer" containerID="39d9858c06caf43bf4b860c62d64ae98ad1b60241013e4b16e462968e64000dc" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.743354 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c5c3127c-bed5-4d35-b535-fc6ca3f79e86\") " pod="openstack/glance-default-external-api-0" Nov 24 12:15:22 crc kubenswrapper[4782]: I1124 12:15:22.816960 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.199046 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:15:23 crc kubenswrapper[4782]: W1124 12:15:23.235342 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod533bc3bf_a4ed_4133_b448_9888eeea6416.slice/crio-e7e1aa22891c26dc3ca0e8e7883b504e158f767704ec6bdd1bc3ec17ef5fd471 WatchSource:0}: Error finding container e7e1aa22891c26dc3ca0e8e7883b504e158f767704ec6bdd1bc3ec17ef5fd471: Status 404 returned error can't find the container with id e7e1aa22891c26dc3ca0e8e7883b504e158f767704ec6bdd1bc3ec17ef5fd471 Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.408586 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:15:23 crc kubenswrapper[4782]: W1124 12:15:23.441914 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25280233_1f0e_44f9_80ce_48d3d2413861.slice/crio-de78ebd9f7b5f9d758bf29e8c4a36800257911df806a062be2c2bab0ae4f22e8 WatchSource:0}: Error finding container de78ebd9f7b5f9d758bf29e8c4a36800257911df806a062be2c2bab0ae4f22e8: Status 404 returned error can't find the container with id de78ebd9f7b5f9d758bf29e8c4a36800257911df806a062be2c2bab0ae4f22e8 Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.503209 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c9b08b-6a0a-45a6-904c-9964952a7b23" path="/var/lib/kubelet/pods/51c9b08b-6a0a-45a6-904c-9964952a7b23/volumes" Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.504323 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97916bf7-05b5-442a-b908-3f0e20f4badb" path="/var/lib/kubelet/pods/97916bf7-05b5-442a-b908-3f0e20f4badb/volumes" Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.505053 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be96657f-b39b-4f41-8e3f-b364cf03d7d3" path="/var/lib/kubelet/pods/be96657f-b39b-4f41-8e3f-b364cf03d7d3/volumes" Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.568152 4782 generic.go:334] "Generic (PLEG): container finished" podID="f9a6efa0-83b9-4dd4-b55b-d262cb88f536" containerID="15a310fadf4c98142fe767e8b6ea2b5050ab4515992fb4c4651d477d42ced434" exitCode=0 Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.568216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxg8x" event={"ID":"f9a6efa0-83b9-4dd4-b55b-d262cb88f536","Type":"ContainerDied","Data":"15a310fadf4c98142fe767e8b6ea2b5050ab4515992fb4c4651d477d42ced434"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.570117 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25280233-1f0e-44f9-80ce-48d3d2413861","Type":"ContainerStarted","Data":"de78ebd9f7b5f9d758bf29e8c4a36800257911df806a062be2c2bab0ae4f22e8"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.572424 4782 generic.go:334] "Generic (PLEG): container finished" podID="55557c05-cdcc-4fff-8ca2-6e616c2a1854" containerID="5c9bfbcde1deaaac144fb1b174965c004833fa8d28297b6f644c32878a823842" exitCode=0 Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.572472 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d520-account-create-brb7r" 
event={"ID":"55557c05-cdcc-4fff-8ca2-6e616c2a1854","Type":"ContainerDied","Data":"5c9bfbcde1deaaac144fb1b174965c004833fa8d28297b6f644c32878a823842"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.587639 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8a90-account-create-b5fwb" event={"ID":"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a","Type":"ContainerStarted","Data":"de42460febf9909cfebdf738f31dc845975cb879933d4d422b29dd63b1d5d138"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.603103 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d20a-account-create-cdvbc" event={"ID":"ff281fed-4697-4e59-9d08-260214164d8e","Type":"ContainerStarted","Data":"71594828c8cb799014de681d6c625eb7c9d7221cb107e6ff4b861f15beae6463"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.603146 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d20a-account-create-cdvbc" event={"ID":"ff281fed-4697-4e59-9d08-260214164d8e","Type":"ContainerStarted","Data":"f18d2231e46156bffee4fa6829a8356cf909f166c9a6fdcc4d596b5eaabc13ea"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.611477 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.617252 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"533bc3bf-a4ed-4133-b448-9888eeea6416","Type":"ContainerStarted","Data":"e7e1aa22891c26dc3ca0e8e7883b504e158f767704ec6bdd1bc3ec17ef5fd471"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.652812 4782 generic.go:334] "Generic (PLEG): container finished" podID="82e0a450-529e-4fed-a95f-c7a37b086f3b" containerID="b5bd8c4e6ca065b1bdfbbf3bb0a7280536730352599aec9313069eca5b97fb1c" exitCode=0 Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.653587 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qs6zz" event={"ID":"82e0a450-529e-4fed-a95f-c7a37b086f3b","Type":"ContainerDied","Data":"b5bd8c4e6ca065b1bdfbbf3bb0a7280536730352599aec9313069eca5b97fb1c"} Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.661083 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-556bd89d59-52m2m" podUID="a2fa4f6f-fc43-4b5c-af94-0534b54364d7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.681929 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-d20a-account-create-cdvbc" podStartSLOduration=4.681912062 podStartE2EDuration="4.681912062s" podCreationTimestamp="2025-11-24 12:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:23.67594984 +0000 UTC m=+1172.919783609" watchObservedRunningTime="2025-11-24 12:15:23.681912062 +0000 UTC m=+1172.925745831" Nov 24 12:15:23 crc kubenswrapper[4782]: I1124 12:15:23.818673 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.673170 4782 generic.go:334] "Generic (PLEG): container finished" podID="ff281fed-4697-4e59-9d08-260214164d8e" containerID="71594828c8cb799014de681d6c625eb7c9d7221cb107e6ff4b861f15beae6463" exitCode=0 Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.673895 4782 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-d20a-account-create-cdvbc" event={"ID":"ff281fed-4697-4e59-9d08-260214164d8e","Type":"ContainerDied","Data":"71594828c8cb799014de681d6c625eb7c9d7221cb107e6ff4b861f15beae6463"} Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.682828 4782 generic.go:334] "Generic (PLEG): container finished" podID="f452b668-2d6c-4b6c-b0d7-053a7b908a24" containerID="95b3701023e2a219c933fc13bc42f7408e5233e3906599fc8a52dd25260dc333" exitCode=0 Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.683197 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gghr8" event={"ID":"f452b668-2d6c-4b6c-b0d7-053a7b908a24","Type":"ContainerDied","Data":"95b3701023e2a219c933fc13bc42f7408e5233e3906599fc8a52dd25260dc333"} Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.688529 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c5c3127c-bed5-4d35-b535-fc6ca3f79e86","Type":"ContainerStarted","Data":"1fe1e7557f041ec33c9df993a8677afa5880b15303487f647dabc155d9f8638a"} Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.702059 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerStarted","Data":"fb7ceb725766678fd4e2983fa3e9e483b75933f3029e26aacd3b51414f2e6638"} Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.711102 4782 generic.go:334] "Generic (PLEG): container finished" podID="0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a" containerID="d7c023fed0f961a554b1c0e012ef372ee467e23ee20cc22f28116ac12eb595ca" exitCode=0 Nov 24 12:15:24 crc kubenswrapper[4782]: I1124 12:15:24.711329 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8a90-account-create-b5fwb" event={"ID":"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a","Type":"ContainerDied","Data":"d7c023fed0f961a554b1c0e012ef372ee467e23ee20cc22f28116ac12eb595ca"} Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.005189 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-556bd89d59-52m2m" podUID="a2fa4f6f-fc43-4b5c-af94-0534b54364d7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.008617 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-556bd89d59-52m2m" podUID="a2fa4f6f-fc43-4b5c-af94-0534b54364d7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.161294 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d520-account-create-brb7r" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.246390 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55557c05-cdcc-4fff-8ca2-6e616c2a1854-operator-scripts\") pod \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.246637 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrsrj\" (UniqueName: \"kubernetes.io/projected/55557c05-cdcc-4fff-8ca2-6e616c2a1854-kube-api-access-wrsrj\") pod \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\" (UID: \"55557c05-cdcc-4fff-8ca2-6e616c2a1854\") " Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.248522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55557c05-cdcc-4fff-8ca2-6e616c2a1854-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55557c05-cdcc-4fff-8ca2-6e616c2a1854" (UID: "55557c05-cdcc-4fff-8ca2-6e616c2a1854"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.271208 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55557c05-cdcc-4fff-8ca2-6e616c2a1854-kube-api-access-wrsrj" (OuterVolumeSpecName: "kube-api-access-wrsrj") pod "55557c05-cdcc-4fff-8ca2-6e616c2a1854" (UID: "55557c05-cdcc-4fff-8ca2-6e616c2a1854"). InnerVolumeSpecName "kube-api-access-wrsrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.335410 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-665949bbb5-7lm9x" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.348933 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrsrj\" (UniqueName: \"kubernetes.io/projected/55557c05-cdcc-4fff-8ca2-6e616c2a1854-kube-api-access-wrsrj\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.348973 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55557c05-cdcc-4fff-8ca2-6e616c2a1854-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.462805 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67ff6979bd-w7crx"] Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.463058 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67ff6979bd-w7crx" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-api" containerID="cri-o://2cbc8f36f42299eecf78fef8eec96e38465157432d98d5765a72a643d8411bf1" gracePeriod=30 Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.463549 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67ff6979bd-w7crx" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-httpd" containerID="cri-o://4dace6148af2078893cc1dd8b0c7708dcbe77672ffa13edd22156610c3163909" gracePeriod=30 Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.728795 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mxg8x"
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.746047 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qs6zz"
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.770084 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m2fq\" (UniqueName: \"kubernetes.io/projected/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-kube-api-access-8m2fq\") pod \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") "
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.770196 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e0a450-529e-4fed-a95f-c7a37b086f3b-operator-scripts\") pod \"82e0a450-529e-4fed-a95f-c7a37b086f3b\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") "
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.770237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfpn9\" (UniqueName: \"kubernetes.io/projected/82e0a450-529e-4fed-a95f-c7a37b086f3b-kube-api-access-kfpn9\") pod \"82e0a450-529e-4fed-a95f-c7a37b086f3b\" (UID: \"82e0a450-529e-4fed-a95f-c7a37b086f3b\") "
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.770338 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-operator-scripts\") pod \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\" (UID: \"f9a6efa0-83b9-4dd4-b55b-d262cb88f536\") "
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.773130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e0a450-529e-4fed-a95f-c7a37b086f3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82e0a450-529e-4fed-a95f-c7a37b086f3b" (UID: "82e0a450-529e-4fed-a95f-c7a37b086f3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.779479 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9a6efa0-83b9-4dd4-b55b-d262cb88f536" (UID: "f9a6efa0-83b9-4dd4-b55b-d262cb88f536"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.788267 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e0a450-529e-4fed-a95f-c7a37b086f3b-kube-api-access-kfpn9" (OuterVolumeSpecName: "kube-api-access-kfpn9") pod "82e0a450-529e-4fed-a95f-c7a37b086f3b" (UID: "82e0a450-529e-4fed-a95f-c7a37b086f3b"). InnerVolumeSpecName "kube-api-access-kfpn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.799466 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d520-account-create-brb7r" event={"ID":"55557c05-cdcc-4fff-8ca2-6e616c2a1854","Type":"ContainerDied","Data":"ff9023201eaaeed435d5a15978a601e71a36bc1c887bffa2d5c37af29ca98f11"}
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.799509 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9023201eaaeed435d5a15978a601e71a36bc1c887bffa2d5c37af29ca98f11"
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.799520 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d520-account-create-brb7r"
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.822660 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxg8x" event={"ID":"f9a6efa0-83b9-4dd4-b55b-d262cb88f536","Type":"ContainerDied","Data":"fe75ae5f86a4e088e5f591ffa74f135ccbd317ec1eee355b4b86dc2df72a1a07"}
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.822696 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe75ae5f86a4e088e5f591ffa74f135ccbd317ec1eee355b4b86dc2df72a1a07"
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.822759 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mxg8x"
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.837319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c5c3127c-bed5-4d35-b535-fc6ca3f79e86","Type":"ContainerStarted","Data":"f0074453d37798f16331c32d09d098a54080254d65ed3f53a413844da2f4d98b"}
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.841530 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-kube-api-access-8m2fq" (OuterVolumeSpecName: "kube-api-access-8m2fq") pod "f9a6efa0-83b9-4dd4-b55b-d262cb88f536" (UID: "f9a6efa0-83b9-4dd4-b55b-d262cb88f536"). InnerVolumeSpecName "kube-api-access-8m2fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.854025 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25280233-1f0e-44f9-80ce-48d3d2413861","Type":"ContainerStarted","Data":"9b27dec0bcc304e6cd9e32e17a6200382ba079da6213904163796af1656812be"}
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.860036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerStarted","Data":"73f630efa946af5f72f9303f9cea6da3756cbed8b234c433f58234484b06d9e0"}
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.877596 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m2fq\" (UniqueName: \"kubernetes.io/projected/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-kube-api-access-8m2fq\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.877628 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e0a450-529e-4fed-a95f-c7a37b086f3b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.877639 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfpn9\" (UniqueName: \"kubernetes.io/projected/82e0a450-529e-4fed-a95f-c7a37b086f3b-kube-api-access-kfpn9\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.877652 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a6efa0-83b9-4dd4-b55b-d262cb88f536-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.892690 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"533bc3bf-a4ed-4133-b448-9888eeea6416","Type":"ContainerStarted","Data":"550705bd3db00029d74b3e7e619af92ec263a5bdcb265806fdac9283b590e9c9"}
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.935212 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qs6zz"
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.936791 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qs6zz" event={"ID":"82e0a450-529e-4fed-a95f-c7a37b086f3b","Type":"ContainerDied","Data":"09268b86ed8e550a9f686bbcb698cb9421a2c5916f046255c36ec09cdd1bc2dc"}
Nov 24 12:15:25 crc kubenswrapper[4782]: I1124 12:15:25.936823 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09268b86ed8e550a9f686bbcb698cb9421a2c5916f046255c36ec09cdd1bc2dc"
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.649044 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gghr8"
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.708687 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f452b668-2d6c-4b6c-b0d7-053a7b908a24-operator-scripts\") pod \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") "
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.708769 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgnc7\" (UniqueName: \"kubernetes.io/projected/f452b668-2d6c-4b6c-b0d7-053a7b908a24-kube-api-access-mgnc7\") pod \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\" (UID: \"f452b668-2d6c-4b6c-b0d7-053a7b908a24\") "
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.712132 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f452b668-2d6c-4b6c-b0d7-053a7b908a24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f452b668-2d6c-4b6c-b0d7-053a7b908a24" (UID: "f452b668-2d6c-4b6c-b0d7-053a7b908a24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.728622 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f452b668-2d6c-4b6c-b0d7-053a7b908a24-kube-api-access-mgnc7" (OuterVolumeSpecName: "kube-api-access-mgnc7") pod "f452b668-2d6c-4b6c-b0d7-053a7b908a24" (UID: "f452b668-2d6c-4b6c-b0d7-053a7b908a24"). InnerVolumeSpecName "kube-api-access-mgnc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.810707 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f452b668-2d6c-4b6c-b0d7-053a7b908a24-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.810739 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgnc7\" (UniqueName: \"kubernetes.io/projected/f452b668-2d6c-4b6c-b0d7-053a7b908a24-kube-api-access-mgnc7\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.876310 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8a90-account-create-b5fwb"
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.930802 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d20a-account-create-cdvbc"
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.962708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gghr8" event={"ID":"f452b668-2d6c-4b6c-b0d7-053a7b908a24","Type":"ContainerDied","Data":"113a694b026ceba9c367f2625f217a98418aa619261f411377185372bea0c1cc"}
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.962745 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113a694b026ceba9c367f2625f217a98418aa619261f411377185372bea0c1cc"
Nov 24 12:15:26 crc kubenswrapper[4782]: I1124 12:15:26.962799 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gghr8"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.003592 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerStarted","Data":"fac614b13e304e2d9b0a10ac032d7326e8349622367d57132e872bf32b650f09"}
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.019637 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skn8\" (UniqueName: \"kubernetes.io/projected/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-kube-api-access-4skn8\") pod \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") "
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.019688 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-operator-scripts\") pod \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\" (UID: \"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a\") "
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.019744 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff281fed-4697-4e59-9d08-260214164d8e-operator-scripts\") pod \"ff281fed-4697-4e59-9d08-260214164d8e\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") "
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.019763 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkhvw\" (UniqueName: \"kubernetes.io/projected/ff281fed-4697-4e59-9d08-260214164d8e-kube-api-access-jkhvw\") pod \"ff281fed-4697-4e59-9d08-260214164d8e\" (UID: \"ff281fed-4697-4e59-9d08-260214164d8e\") "
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.021777 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a" (UID: "0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.022557 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff281fed-4697-4e59-9d08-260214164d8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff281fed-4697-4e59-9d08-260214164d8e" (UID: "ff281fed-4697-4e59-9d08-260214164d8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.028592 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff281fed-4697-4e59-9d08-260214164d8e-kube-api-access-jkhvw" (OuterVolumeSpecName: "kube-api-access-jkhvw") pod "ff281fed-4697-4e59-9d08-260214164d8e" (UID: "ff281fed-4697-4e59-9d08-260214164d8e"). InnerVolumeSpecName "kube-api-access-jkhvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.032290 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"533bc3bf-a4ed-4133-b448-9888eeea6416","Type":"ContainerStarted","Data":"70c383847fdb5c1a50ce0c1f493087d8963c5facefafbfee5fb72a7127f03d88"}
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.034578 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-kube-api-access-4skn8" (OuterVolumeSpecName: "kube-api-access-4skn8") pod "0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a" (UID: "0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a"). InnerVolumeSpecName "kube-api-access-4skn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.040676 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8a90-account-create-b5fwb"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.040676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8a90-account-create-b5fwb" event={"ID":"0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a","Type":"ContainerDied","Data":"de42460febf9909cfebdf738f31dc845975cb879933d4d422b29dd63b1d5d138"}
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.042795 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de42460febf9909cfebdf738f31dc845975cb879933d4d422b29dd63b1d5d138"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.058485 4782 generic.go:334] "Generic (PLEG): container finished" podID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerID="4dace6148af2078893cc1dd8b0c7708dcbe77672ffa13edd22156610c3163909" exitCode=0
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.058619 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67ff6979bd-w7crx" event={"ID":"31ab3120-bee9-41b1-b9cc-61b5a953945e","Type":"ContainerDied","Data":"4dace6148af2078893cc1dd8b0c7708dcbe77672ffa13edd22156610c3163909"}
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.060690 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.060680148 podStartE2EDuration="6.060680148s" podCreationTimestamp="2025-11-24 12:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:27.056031222 +0000 UTC m=+1176.299865001" watchObservedRunningTime="2025-11-24 12:15:27.060680148 +0000 UTC m=+1176.304513917"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.095785 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d20a-account-create-cdvbc" event={"ID":"ff281fed-4697-4e59-9d08-260214164d8e","Type":"ContainerDied","Data":"f18d2231e46156bffee4fa6829a8356cf909f166c9a6fdcc4d596b5eaabc13ea"}
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.095832 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18d2231e46156bffee4fa6829a8356cf909f166c9a6fdcc4d596b5eaabc13ea"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.095935 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d20a-account-create-cdvbc"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.131139 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff281fed-4697-4e59-9d08-260214164d8e-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.131166 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkhvw\" (UniqueName: \"kubernetes.io/projected/ff281fed-4697-4e59-9d08-260214164d8e-kube-api-access-jkhvw\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.131178 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4skn8\" (UniqueName: \"kubernetes.io/projected/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-kube-api-access-4skn8\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.131192 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.662287 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.767470 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Nov 24 12:15:27 crc kubenswrapper[4782]: I1124 12:15:27.812230 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:15:28 crc kubenswrapper[4782]: I1124 12:15:28.105813 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"25280233-1f0e-44f9-80ce-48d3d2413861","Type":"ContainerStarted","Data":"ebf78cd9f856cc4fc60542b160f8ac7772c743cb2bb1b1a07616bfcf400ca0da"}
Nov 24 12:15:28 crc kubenswrapper[4782]: I1124 12:15:28.106229 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 24 12:15:28 crc kubenswrapper[4782]: I1124 12:15:28.108939 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerStarted","Data":"c5c5577f87c6d2a3d33d523042b13a4df270d5dc1fee94d832e509c38e01fc10"}
Nov 24 12:15:28 crc kubenswrapper[4782]: I1124 12:15:28.111658 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c5c3127c-bed5-4d35-b535-fc6ca3f79e86","Type":"ContainerStarted","Data":"32b5e9bb01d913bb5218de064c0cbc495356417b88825c201ae771600950b50f"}
Nov 24 12:15:28 crc kubenswrapper[4782]: I1124 12:15:28.130077 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.130059095 podStartE2EDuration="7.130059095s" podCreationTimestamp="2025-11-24 12:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:28.124999408 +0000 UTC m=+1177.368833187" watchObservedRunningTime="2025-11-24 12:15:28.130059095 +0000 UTC m=+1177.373892864"
Nov 24 12:15:28 crc kubenswrapper[4782]: I1124 12:15:28.155937 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.155916186 podStartE2EDuration="6.155916186s" podCreationTimestamp="2025-11-24 12:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:28.147694753 +0000 UTC m=+1177.391528532" watchObservedRunningTime="2025-11-24 12:15:28.155916186 +0000 UTC m=+1177.399749955"
Nov 24 12:15:29 crc kubenswrapper[4782]: I1124 12:15:29.990439 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:29 crc kubenswrapper[4782]: I1124 12:15:29.992594 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-556bd89d59-52m2m"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.140573 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-central-agent" containerID="cri-o://73f630efa946af5f72f9303f9cea6da3756cbed8b234c433f58234484b06d9e0" gracePeriod=30
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.140977 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-notification-agent" containerID="cri-o://fac614b13e304e2d9b0a10ac032d7326e8349622367d57132e872bf32b650f09" gracePeriod=30
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.140986 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="sg-core" containerID="cri-o://c5c5577f87c6d2a3d33d523042b13a4df270d5dc1fee94d832e509c38e01fc10" gracePeriod=30
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.141080 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="proxy-httpd" containerID="cri-o://e06796d0ec78c6d6a509995d06f3eaeae2629da07e8bafbb43ed5fa27aaeab08" gracePeriod=30
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.140656 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerStarted","Data":"e06796d0ec78c6d6a509995d06f3eaeae2629da07e8bafbb43ed5fa27aaeab08"}
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.141151 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.192834 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.848295707 podStartE2EDuration="9.192815739s" podCreationTimestamp="2025-11-24 12:15:21 +0000 UTC" firstStartedPulling="2025-11-24 12:15:23.71653766 +0000 UTC m=+1172.960371429" lastFinishedPulling="2025-11-24 12:15:29.061057692 +0000 UTC m=+1178.304891461" observedRunningTime="2025-11-24 12:15:30.187053083 +0000 UTC m=+1179.430886862" watchObservedRunningTime="2025-11-24 12:15:30.192815739 +0000 UTC m=+1179.436649508"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.258326 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h75bq"]
Nov 24 12:15:30 crc kubenswrapper[4782]: E1124 12:15:30.258707 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452b668-2d6c-4b6c-b0d7-053a7b908a24" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.258722 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452b668-2d6c-4b6c-b0d7-053a7b908a24" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: E1124 12:15:30.258741 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55557c05-cdcc-4fff-8ca2-6e616c2a1854" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.258747 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="55557c05-cdcc-4fff-8ca2-6e616c2a1854" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: E1124 12:15:30.258759 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e0a450-529e-4fed-a95f-c7a37b086f3b" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.258765 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e0a450-529e-4fed-a95f-c7a37b086f3b" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: E1124 12:15:30.258776 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a6efa0-83b9-4dd4-b55b-d262cb88f536" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.258781 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a6efa0-83b9-4dd4-b55b-d262cb88f536" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: E1124 12:15:30.258807 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.258813 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: E1124 12:15:30.258831 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff281fed-4697-4e59-9d08-260214164d8e" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.258838 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff281fed-4697-4e59-9d08-260214164d8e" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.259015 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a6efa0-83b9-4dd4-b55b-d262cb88f536" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.259045 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff281fed-4697-4e59-9d08-260214164d8e" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.259058 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e0a450-529e-4fed-a95f-c7a37b086f3b" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.259070 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.259083 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452b668-2d6c-4b6c-b0d7-053a7b908a24" containerName="mariadb-database-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.259093 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="55557c05-cdcc-4fff-8ca2-6e616c2a1854" containerName="mariadb-account-create"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.259834 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.265807 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.266073 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.266578 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bcqdb"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.292825 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-scripts\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.292945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb7sc\" (UniqueName: \"kubernetes.io/projected/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-kube-api-access-xb7sc\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.293000 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.293062 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-config-data\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.312127 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h75bq"]
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.394736 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb7sc\" (UniqueName: \"kubernetes.io/projected/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-kube-api-access-xb7sc\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.394821 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.394903 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-config-data\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.394943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-scripts\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.408176 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-scripts\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.415077 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.418064 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-config-data\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.418207 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb7sc\" (UniqueName: \"kubernetes.io/projected/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-kube-api-access-xb7sc\") pod \"nova-cell0-conductor-db-sync-h75bq\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:30 crc kubenswrapper[4782]: I1124 12:15:30.625352 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h75bq"
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.167873 4782 generic.go:334] "Generic (PLEG): container finished" podID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerID="2cbc8f36f42299eecf78fef8eec96e38465157432d98d5765a72a643d8411bf1" exitCode=0
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.168241 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67ff6979bd-w7crx" event={"ID":"31ab3120-bee9-41b1-b9cc-61b5a953945e","Type":"ContainerDied","Data":"2cbc8f36f42299eecf78fef8eec96e38465157432d98d5765a72a643d8411bf1"}
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.181442 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h75bq"]
Nov 24 12:15:31 crc kubenswrapper[4782]: W1124 12:15:31.181679 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc358e6ea_3b34_4ec1_ba92_dd7437ccaf37.slice/crio-26f5015cad986b2fa6bd2e563d92ec416fcd64ac890f3bb57d3736453d58553b WatchSource:0}: Error finding container 26f5015cad986b2fa6bd2e563d92ec416fcd64ac890f3bb57d3736453d58553b: Status 404 returned error can't find the container with id 26f5015cad986b2fa6bd2e563d92ec416fcd64ac890f3bb57d3736453d58553b
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.183149 4782 generic.go:334] "Generic (PLEG): container finished" podID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerID="e06796d0ec78c6d6a509995d06f3eaeae2629da07e8bafbb43ed5fa27aaeab08" exitCode=0
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.183183 4782 generic.go:334] "Generic (PLEG): container finished" podID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerID="c5c5577f87c6d2a3d33d523042b13a4df270d5dc1fee94d832e509c38e01fc10" exitCode=2
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.183194 4782 generic.go:334] "Generic (PLEG): container finished" podID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerID="fac614b13e304e2d9b0a10ac032d7326e8349622367d57132e872bf32b650f09" exitCode=0
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.183213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerDied","Data":"e06796d0ec78c6d6a509995d06f3eaeae2629da07e8bafbb43ed5fa27aaeab08"}
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.183238 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerDied","Data":"c5c5577f87c6d2a3d33d523042b13a4df270d5dc1fee94d832e509c38e01fc10"}
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.183254 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerDied","Data":"fac614b13e304e2d9b0a10ac032d7326e8349622367d57132e872bf32b650f09"}
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.599208 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67ff6979bd-w7crx"
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.624546 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-ovndb-tls-certs\") pod \"31ab3120-bee9-41b1-b9cc-61b5a953945e\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") "
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.624605 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qz7d\" (UniqueName: \"kubernetes.io/projected/31ab3120-bee9-41b1-b9cc-61b5a953945e-kube-api-access-2qz7d\") pod \"31ab3120-bee9-41b1-b9cc-61b5a953945e\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") "
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.624640 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-httpd-config\") pod \"31ab3120-bee9-41b1-b9cc-61b5a953945e\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") "
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.624770 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-config\") pod \"31ab3120-bee9-41b1-b9cc-61b5a953945e\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") "
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.624834 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-combined-ca-bundle\") pod \"31ab3120-bee9-41b1-b9cc-61b5a953945e\" (UID: \"31ab3120-bee9-41b1-b9cc-61b5a953945e\") "
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.659637 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ab3120-bee9-41b1-b9cc-61b5a953945e-kube-api-access-2qz7d" (OuterVolumeSpecName: "kube-api-access-2qz7d") pod "31ab3120-bee9-41b1-b9cc-61b5a953945e" (UID: "31ab3120-bee9-41b1-b9cc-61b5a953945e"). InnerVolumeSpecName "kube-api-access-2qz7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.661405 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "31ab3120-bee9-41b1-b9cc-61b5a953945e" (UID: "31ab3120-bee9-41b1-b9cc-61b5a953945e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.727136 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qz7d\" (UniqueName: \"kubernetes.io/projected/31ab3120-bee9-41b1-b9cc-61b5a953945e-kube-api-access-2qz7d\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.727169 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-httpd-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.764841 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31ab3120-bee9-41b1-b9cc-61b5a953945e" (UID: "31ab3120-bee9-41b1-b9cc-61b5a953945e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.772289 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-config" (OuterVolumeSpecName: "config") pod "31ab3120-bee9-41b1-b9cc-61b5a953945e" (UID: "31ab3120-bee9-41b1-b9cc-61b5a953945e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.778564 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "31ab3120-bee9-41b1-b9cc-61b5a953945e" (UID: "31ab3120-bee9-41b1-b9cc-61b5a953945e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.829656 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.829686 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:31 crc kubenswrapper[4782]: I1124 12:15:31.829700 4782 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ab3120-bee9-41b1-b9cc-61b5a953945e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.143762 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.144078 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.206622 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.216004 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67ff6979bd-w7crx" event={"ID":"31ab3120-bee9-41b1-b9cc-61b5a953945e","Type":"ContainerDied","Data":"cbe09d3b7a93b49959719ecacc963a888c20a8c4a9529009a18463606b02e48d"}
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.216041 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67ff6979bd-w7crx"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.216061 4782 scope.go:117] "RemoveContainer" containerID="4dace6148af2078893cc1dd8b0c7708dcbe77672ffa13edd22156610c3163909"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.232801 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h75bq" event={"ID":"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37","Type":"ContainerStarted","Data":"26f5015cad986b2fa6bd2e563d92ec416fcd64ac890f3bb57d3736453d58553b"}
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.233364 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.282221 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.306301 4782 scope.go:117] "RemoveContainer" containerID="2cbc8f36f42299eecf78fef8eec96e38465157432d98d5765a72a643d8411bf1"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.384435 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67ff6979bd-w7crx"]
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.398231 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67ff6979bd-w7crx"]
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.818590 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.818782 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.868454 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 12:15:32 crc kubenswrapper[4782]: I1124 12:15:32.927443 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 12:15:33 crc kubenswrapper[4782]: I1124 12:15:33.245365 4782 generic.go:334] "Generic (PLEG): container finished" podID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerID="73f630efa946af5f72f9303f9cea6da3756cbed8b234c433f58234484b06d9e0" exitCode=0
Nov 24 12:15:33 crc kubenswrapper[4782]: I1124 12:15:33.245455 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerDied","Data":"73f630efa946af5f72f9303f9cea6da3756cbed8b234c433f58234484b06d9e0"}
Nov 24 12:15:33 crc kubenswrapper[4782]: I1124 12:15:33.249748 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 12:15:33 crc kubenswrapper[4782]: I1124 12:15:33.249783 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 12:15:33 crc kubenswrapper[4782]: I1124 12:15:33.249798 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 12:15:33 crc kubenswrapper[4782]: I1124 12:15:33.503429 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" path="/var/lib/kubelet/pods/31ab3120-bee9-41b1-b9cc-61b5a953945e/volumes"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.003632 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.174333 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt78m\" (UniqueName: \"kubernetes.io/projected/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-kube-api-access-zt78m\") pod \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") "
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.174438 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-combined-ca-bundle\") pod \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") "
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.174465 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-config-data\") pod \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") "
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.174494 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-sg-core-conf-yaml\") pod \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") "
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.174657 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-scripts\") pod \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") "
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.174681 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-log-httpd\") pod \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") "
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.174724 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-run-httpd\") pod \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\" (UID: \"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898\") "
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.175341 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" (UID: "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.175440 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" (UID: "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.192204 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-scripts" (OuterVolumeSpecName: "scripts") pod "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" (UID: "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.206634 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-kube-api-access-zt78m" (OuterVolumeSpecName: "kube-api-access-zt78m") pod "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" (UID: "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898"). InnerVolumeSpecName "kube-api-access-zt78m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.271689 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" (UID: "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.277736 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.277768 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.277780 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.277794 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt78m\" (UniqueName: \"kubernetes.io/projected/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-kube-api-access-zt78m\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.277808 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.285876 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.287363 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.287622 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40cfcd8-b6c0-444b-b8c2-2770cc6cf898","Type":"ContainerDied","Data":"fb7ceb725766678fd4e2983fa3e9e483b75933f3029e26aacd3b51414f2e6638"}
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.287692 4782 scope.go:117] "RemoveContainer" containerID="e06796d0ec78c6d6a509995d06f3eaeae2629da07e8bafbb43ed5fa27aaeab08"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.298571 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" (UID: "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.352658 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-config-data" (OuterVolumeSpecName: "config-data") pod "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" (UID: "d40cfcd8-b6c0-444b-b8c2-2770cc6cf898"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.381017 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.381047 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.412095 4782 scope.go:117] "RemoveContainer" containerID="c5c5577f87c6d2a3d33d523042b13a4df270d5dc1fee94d832e509c38e01fc10"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.446576 4782 scope.go:117] "RemoveContainer" containerID="fac614b13e304e2d9b0a10ac032d7326e8349622367d57132e872bf32b650f09"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.481413 4782 scope.go:117] "RemoveContainer" containerID="73f630efa946af5f72f9303f9cea6da3756cbed8b234c433f58234484b06d9e0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.639734 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.655443 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.717598 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:15:34 crc kubenswrapper[4782]: E1124 12:15:34.718207 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-api"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.718608 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-api"
Nov 24 12:15:34 crc kubenswrapper[4782]: E1124 12:15:34.718795 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-central-agent"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.718871 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-central-agent"
Nov 24 12:15:34 crc kubenswrapper[4782]: E1124 12:15:34.718958 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="sg-core"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.719016 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="sg-core"
Nov 24 12:15:34 crc kubenswrapper[4782]: E1124 12:15:34.719111 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-notification-agent"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.719194 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-notification-agent"
Nov 24 12:15:34 crc kubenswrapper[4782]: E1124 12:15:34.719260 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-httpd"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.719328 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-httpd"
Nov 24 12:15:34 crc kubenswrapper[4782]: E1124 12:15:34.719418 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="proxy-httpd"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.719480 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="proxy-httpd"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.719756 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="sg-core"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.719844 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-notification-agent"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.719923 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-api"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.720008 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="ceilometer-central-agent"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.720072 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ab3120-bee9-41b1-b9cc-61b5a953945e" containerName="neutron-httpd"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.720144 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" containerName="proxy-httpd"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.721810 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.726583 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.736410 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.736426 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.788320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-run-httpd\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.788405 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-config-data\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.788472 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-scripts\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.788501 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.788531 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-log-httpd\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.788600 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.788654 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshwk\" (UniqueName: \"kubernetes.io/projected/ac84111a-e67e-4f05-8127-030bce907204-kube-api-access-hshwk\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.890789 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-scripts\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.890847 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.890886 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-log-httpd\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.890959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.891011 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshwk\" (UniqueName: \"kubernetes.io/projected/ac84111a-e67e-4f05-8127-030bce907204-kube-api-access-hshwk\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.891089 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-run-httpd\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.891109 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-config-data\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.891679 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-run-httpd\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.891909 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-log-httpd\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.895257 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.898648 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-config-data\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0"
Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.915038 4782 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0" Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.919066 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-scripts\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0" Nov 24 12:15:34 crc kubenswrapper[4782]: I1124 12:15:34.920337 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshwk\" (UniqueName: \"kubernetes.io/projected/ac84111a-e67e-4f05-8127-030bce907204-kube-api-access-hshwk\") pod \"ceilometer-0\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " pod="openstack/ceilometer-0" Nov 24 12:15:35 crc kubenswrapper[4782]: I1124 12:15:35.025565 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 12:15:35 crc kubenswrapper[4782]: I1124 12:15:35.070662 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:15:35 crc kubenswrapper[4782]: I1124 12:15:35.506228 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40cfcd8-b6c0-444b-b8c2-2770cc6cf898" path="/var/lib/kubelet/pods/d40cfcd8-b6c0-444b-b8c2-2770cc6cf898/volumes" Nov 24 12:15:35 crc kubenswrapper[4782]: I1124 12:15:35.663355 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:15:36 crc kubenswrapper[4782]: I1124 12:15:36.314317 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerStarted","Data":"bcbb41a405ba3ec991bfb3c40028cfb83e20c9e679f62657bc8c8543e1be4a3e"} Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.131582 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.131684 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.276879 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.277293 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.282920 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.662005 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.662078 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.662850 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" 
containerStatusID={"Type":"cri-o","ID":"fe33b33da506efdf8f0ec330790ceaef82fa73fd0855882c4ad104afd14f7bbc"} pod="openstack/horizon-8684f6cd6d-mwlp6" containerMessage="Container horizon failed startup probe, will be restarted" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.662883 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" containerID="cri-o://fe33b33da506efdf8f0ec330790ceaef82fa73fd0855882c4ad104afd14f7bbc" gracePeriod=30 Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.765754 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.765823 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.766532 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"125ebd65dabcd15c20027fc4e84b06ecd573dd8fcdb0657e8af1622f5c0d2bbf"} pod="openstack/horizon-6574f9bb76-jkv6h" containerMessage="Container horizon failed startup probe, will be restarted" Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.766570 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" containerID="cri-o://125ebd65dabcd15c20027fc4e84b06ecd573dd8fcdb0657e8af1622f5c0d2bbf" gracePeriod=30 Nov 24 12:15:37 crc kubenswrapper[4782]: I1124 12:15:37.995866 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:15:38 crc kubenswrapper[4782]: I1124 12:15:38.333559 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerStarted","Data":"61fea380e9fb55dc94bdecd83560d210c05cedc3695ce537e8590c6b587b5800"} Nov 24 12:15:42 crc kubenswrapper[4782]: I1124 12:15:42.076589 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="25280233-1f0e-44f9-80ce-48d3d2413861" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.177:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:15:42 crc kubenswrapper[4782]: I1124 12:15:42.710558 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="25280233-1f0e-44f9-80ce-48d3d2413861" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.177:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:15:44 crc kubenswrapper[4782]: I1124 12:15:44.130973 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:15:48 crc kubenswrapper[4782]: E1124 12:15:48.910260 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Nov 24 12:15:48 crc 
kubenswrapper[4782]: E1124 12:15:48.910935 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xb7sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-h75bq_openstack(c358e6ea-3b34-4ec1-ba92-dd7437ccaf37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:15:48 crc kubenswrapper[4782]: E1124 12:15:48.912341 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-h75bq" podUID="c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" Nov 24 12:15:49 crc kubenswrapper[4782]: I1124 12:15:49.475612 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerStarted","Data":"b6b6ec85b5080a70e5e2ebd1e1fe214e3a7a764401b06ac2fa73bfcc340f34f2"} Nov 24 12:15:49 crc kubenswrapper[4782]: E1124 12:15:49.477553 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-h75bq" podUID="c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" Nov 24 12:15:50 crc kubenswrapper[4782]: I1124 12:15:50.487894 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerStarted","Data":"80ef0058aacdff971a17282e82983e2387baa35cb4452c2e1c60ff4314865911"} Nov 24 12:15:52 crc kubenswrapper[4782]: I1124 12:15:52.511098 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerStarted","Data":"0a9a4db47b12afa9b75855164f40014d24b8181fc9cb564cb5e5c45ce2f3b98a"} Nov 24 12:15:52 crc kubenswrapper[4782]: I1124 12:15:52.511854 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:15:52 crc kubenswrapper[4782]: I1124 12:15:52.511499 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="proxy-httpd" containerID="cri-o://0a9a4db47b12afa9b75855164f40014d24b8181fc9cb564cb5e5c45ce2f3b98a" gracePeriod=30 Nov 24 12:15:52 crc kubenswrapper[4782]: I1124 12:15:52.511249 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-central-agent" containerID="cri-o://61fea380e9fb55dc94bdecd83560d210c05cedc3695ce537e8590c6b587b5800" gracePeriod=30 Nov 24 12:15:52 crc kubenswrapper[4782]: I1124 12:15:52.511572 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="sg-core" containerID="cri-o://80ef0058aacdff971a17282e82983e2387baa35cb4452c2e1c60ff4314865911" gracePeriod=30 Nov 24 12:15:52 crc kubenswrapper[4782]: I1124 12:15:52.511559 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-notification-agent" containerID="cri-o://b6b6ec85b5080a70e5e2ebd1e1fe214e3a7a764401b06ac2fa73bfcc340f34f2" gracePeriod=30 Nov 24 12:15:52 crc kubenswrapper[4782]: I1124 12:15:52.547778 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.523477513 podStartE2EDuration="18.547756494s" podCreationTimestamp="2025-11-24 12:15:34 +0000 UTC" firstStartedPulling="2025-11-24 12:15:35.676671559 +0000 UTC m=+1184.920505328" lastFinishedPulling="2025-11-24 12:15:51.70095054 +0000 UTC m=+1200.944784309" observedRunningTime="2025-11-24 12:15:52.537349472 +0000 UTC m=+1201.781183261" watchObservedRunningTime="2025-11-24 12:15:52.547756494 +0000 UTC m=+1201.791590263" Nov 24 12:15:53 crc kubenswrapper[4782]: I1124 12:15:53.521999 4782 generic.go:334] "Generic (PLEG): container finished" podID="ac84111a-e67e-4f05-8127-030bce907204" containerID="80ef0058aacdff971a17282e82983e2387baa35cb4452c2e1c60ff4314865911" exitCode=2 Nov 24 12:15:53 crc kubenswrapper[4782]: I1124 12:15:53.523224 4782 generic.go:334] "Generic (PLEG): container finished" podID="ac84111a-e67e-4f05-8127-030bce907204" containerID="b6b6ec85b5080a70e5e2ebd1e1fe214e3a7a764401b06ac2fa73bfcc340f34f2" exitCode=0 Nov 24 12:15:53 crc kubenswrapper[4782]: I1124 12:15:53.523330 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerDied","Data":"80ef0058aacdff971a17282e82983e2387baa35cb4452c2e1c60ff4314865911"} Nov 24 12:15:53 crc kubenswrapper[4782]: I1124 12:15:53.523462 4782 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerDied","Data":"b6b6ec85b5080a70e5e2ebd1e1fe214e3a7a764401b06ac2fa73bfcc340f34f2"} Nov 24 12:16:00 crc kubenswrapper[4782]: I1124 12:16:00.969630 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:16:01 crc kubenswrapper[4782]: I1124 12:16:01.590551 4782 generic.go:334] "Generic (PLEG): container finished" podID="ac84111a-e67e-4f05-8127-030bce907204" containerID="61fea380e9fb55dc94bdecd83560d210c05cedc3695ce537e8590c6b587b5800" exitCode=0 Nov 24 12:16:01 crc kubenswrapper[4782]: I1124 12:16:01.590833 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerDied","Data":"61fea380e9fb55dc94bdecd83560d210c05cedc3695ce537e8590c6b587b5800"} Nov 24 12:16:02 crc kubenswrapper[4782]: I1124 12:16:02.600118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h75bq" event={"ID":"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37","Type":"ContainerStarted","Data":"0173d54fc95aaabb0d8b6fa2bd4e1c12fab4c5a3f66cbea010d1369c3b5edc6f"} Nov 24 12:16:02 crc kubenswrapper[4782]: I1124 12:16:02.646679 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-h75bq" podStartSLOduration=1.984038554 podStartE2EDuration="32.64665733s" podCreationTimestamp="2025-11-24 12:15:30 +0000 UTC" firstStartedPulling="2025-11-24 12:15:31.184727027 +0000 UTC m=+1180.428560796" lastFinishedPulling="2025-11-24 12:16:01.847345803 +0000 UTC m=+1211.091179572" observedRunningTime="2025-11-24 12:16:02.637708208 +0000 UTC m=+1211.881541977" watchObservedRunningTime="2025-11-24 12:16:02.64665733 +0000 UTC m=+1211.890491119" Nov 24 12:16:05 crc kubenswrapper[4782]: I1124 12:16:05.094291 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:16:08 crc kubenswrapper[4782]: I1124 12:16:08.660860 4782 generic.go:334] "Generic (PLEG): container finished" podID="b6cd757b-7259-4caf-b928-2dc936c99028" containerID="fe33b33da506efdf8f0ec330790ceaef82fa73fd0855882c4ad104afd14f7bbc" exitCode=137 Nov 24 12:16:08 crc kubenswrapper[4782]: I1124 12:16:08.660923 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerDied","Data":"fe33b33da506efdf8f0ec330790ceaef82fa73fd0855882c4ad104afd14f7bbc"} Nov 24 12:16:08 crc kubenswrapper[4782]: I1124 12:16:08.661931 4782 scope.go:117] "RemoveContainer" containerID="4936e6759b1bb688284ae4a7f5c6a07a624b02b19d698563d135b73499c945c8" Nov 24 12:16:08 crc kubenswrapper[4782]: I1124 12:16:08.676573 4782 generic.go:334] "Generic (PLEG): container finished" podID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerID="125ebd65dabcd15c20027fc4e84b06ecd573dd8fcdb0657e8af1622f5c0d2bbf" exitCode=137 Nov 24 12:16:08 crc kubenswrapper[4782]: I1124 12:16:08.676615 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6574f9bb76-jkv6h" event={"ID":"41a8247d-b0d2-4a46-b108-bc260db36e11","Type":"ContainerDied","Data":"125ebd65dabcd15c20027fc4e84b06ecd573dd8fcdb0657e8af1622f5c0d2bbf"} Nov 24 12:16:08 crc kubenswrapper[4782]: I1124 12:16:08.865541 4782 scope.go:117] 
"RemoveContainer" containerID="a882983edbff0b88582f5b543adfc3b5f1a92090d9d3705f639c8751eda3543a" Nov 24 12:16:09 crc kubenswrapper[4782]: I1124 12:16:09.688240 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerStarted","Data":"d98871cbaa1f6df1cde30a5a084019eaa955a9a85c74ff9b6bce5fc415979ee1"} Nov 24 12:16:09 crc kubenswrapper[4782]: I1124 12:16:09.691630 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6574f9bb76-jkv6h" event={"ID":"41a8247d-b0d2-4a46-b108-bc260db36e11","Type":"ContainerStarted","Data":"8a44cd4a3dafb8618650018f4cd40b53f8a104202acb637690001a3c54d9756b"} Nov 24 12:16:17 crc kubenswrapper[4782]: I1124 12:16:17.660344 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:16:17 crc kubenswrapper[4782]: I1124 12:16:17.660877 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:16:17 crc kubenswrapper[4782]: I1124 12:16:17.765306 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:16:17 crc kubenswrapper[4782]: I1124 12:16:17.765354 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:16:22 crc kubenswrapper[4782]: I1124 12:16:22.864290 4782 generic.go:334] "Generic (PLEG): container finished" podID="ac84111a-e67e-4f05-8127-030bce907204" containerID="0a9a4db47b12afa9b75855164f40014d24b8181fc9cb564cb5e5c45ce2f3b98a" exitCode=137 Nov 24 12:16:22 crc kubenswrapper[4782]: I1124 12:16:22.864787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerDied","Data":"0a9a4db47b12afa9b75855164f40014d24b8181fc9cb564cb5e5c45ce2f3b98a"} Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.032924 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.196476 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hshwk\" (UniqueName: \"kubernetes.io/projected/ac84111a-e67e-4f05-8127-030bce907204-kube-api-access-hshwk\") pod \"ac84111a-e67e-4f05-8127-030bce907204\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.196869 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-scripts\") pod \"ac84111a-e67e-4f05-8127-030bce907204\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.196924 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-combined-ca-bundle\") pod \"ac84111a-e67e-4f05-8127-030bce907204\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.196972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-config-data\") pod \"ac84111a-e67e-4f05-8127-030bce907204\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.197180 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-log-httpd\") pod \"ac84111a-e67e-4f05-8127-030bce907204\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.197286 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-sg-core-conf-yaml\") pod \"ac84111a-e67e-4f05-8127-030bce907204\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.197365 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-run-httpd\") pod \"ac84111a-e67e-4f05-8127-030bce907204\" (UID: \"ac84111a-e67e-4f05-8127-030bce907204\") " Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.197552 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac84111a-e67e-4f05-8127-030bce907204" (UID: "ac84111a-e67e-4f05-8127-030bce907204"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.198095 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.198700 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac84111a-e67e-4f05-8127-030bce907204" (UID: "ac84111a-e67e-4f05-8127-030bce907204"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.217582 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-scripts" (OuterVolumeSpecName: "scripts") pod "ac84111a-e67e-4f05-8127-030bce907204" (UID: "ac84111a-e67e-4f05-8127-030bce907204"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.217614 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac84111a-e67e-4f05-8127-030bce907204-kube-api-access-hshwk" (OuterVolumeSpecName: "kube-api-access-hshwk") pod "ac84111a-e67e-4f05-8127-030bce907204" (UID: "ac84111a-e67e-4f05-8127-030bce907204"). InnerVolumeSpecName "kube-api-access-hshwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.308927 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac84111a-e67e-4f05-8127-030bce907204-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.308962 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hshwk\" (UniqueName: \"kubernetes.io/projected/ac84111a-e67e-4f05-8127-030bce907204-kube-api-access-hshwk\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.308976 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.315397 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac84111a-e67e-4f05-8127-030bce907204" (UID: "ac84111a-e67e-4f05-8127-030bce907204"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.317182 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac84111a-e67e-4f05-8127-030bce907204" (UID: "ac84111a-e67e-4f05-8127-030bce907204"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.347942 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-config-data" (OuterVolumeSpecName: "config-data") pod "ac84111a-e67e-4f05-8127-030bce907204" (UID: "ac84111a-e67e-4f05-8127-030bce907204"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.410331 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.410582 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.410760 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac84111a-e67e-4f05-8127-030bce907204-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.879429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h75bq" event={"ID":"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37","Type":"ContainerDied","Data":"0173d54fc95aaabb0d8b6fa2bd4e1c12fab4c5a3f66cbea010d1369c3b5edc6f"} Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.879361 4782 generic.go:334] "Generic (PLEG): container finished" podID="c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" containerID="0173d54fc95aaabb0d8b6fa2bd4e1c12fab4c5a3f66cbea010d1369c3b5edc6f" exitCode=0 Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.886230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac84111a-e67e-4f05-8127-030bce907204","Type":"ContainerDied","Data":"bcbb41a405ba3ec991bfb3c40028cfb83e20c9e679f62657bc8c8543e1be4a3e"} Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.886275 4782 scope.go:117] "RemoveContainer" containerID="0a9a4db47b12afa9b75855164f40014d24b8181fc9cb564cb5e5c45ce2f3b98a" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.886311 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.933911 4782 scope.go:117] "RemoveContainer" containerID="80ef0058aacdff971a17282e82983e2387baa35cb4452c2e1c60ff4314865911" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.943541 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.964432 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.980929 4782 scope.go:117] "RemoveContainer" containerID="b6b6ec85b5080a70e5e2ebd1e1fe214e3a7a764401b06ac2fa73bfcc340f34f2" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.983504 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:23 crc kubenswrapper[4782]: E1124 12:16:23.983922 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="sg-core" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.983938 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="sg-core" Nov 24 12:16:23 crc kubenswrapper[4782]: E1124 12:16:23.983963 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-central-agent" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.983970 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-central-agent" Nov 24 12:16:23 crc kubenswrapper[4782]: E1124 12:16:23.983989 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="proxy-httpd" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.983995 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="proxy-httpd" Nov 24 12:16:23 crc kubenswrapper[4782]: E1124 12:16:23.984002 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-notification-agent" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.984008 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-notification-agent" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.984194 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="sg-core" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.984234 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-notification-agent" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.984269 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="proxy-httpd" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.984285 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac84111a-e67e-4f05-8127-030bce907204" containerName="ceilometer-central-agent" Nov 24 12:16:23 crc kubenswrapper[4782]: I1124 12:16:23.986899 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.000095 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.003766 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.014542 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.027644 4782 scope.go:117] "RemoveContainer" containerID="61fea380e9fb55dc94bdecd83560d210c05cedc3695ce537e8590c6b587b5800" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.141693 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-config-data\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.141794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.141818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-run-httpd\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.141849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-scripts\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.141906 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2mz\" (UniqueName: \"kubernetes.io/projected/c0de4cc4-f0b4-47fc-964d-f02027dd445d-kube-api-access-xc2mz\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.141928 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.141959 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-log-httpd\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.243117 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-config-data\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.243216 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.243907 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-run-httpd\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.243993 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-run-httpd\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.244041 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-scripts\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.244119 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2mz\" (UniqueName: \"kubernetes.io/projected/c0de4cc4-f0b4-47fc-964d-f02027dd445d-kube-api-access-xc2mz\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.244144 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.244166 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-log-httpd\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.244467 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-log-httpd\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.249120 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.251518 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.252534 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-config-data\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.263980 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2mz\" (UniqueName: \"kubernetes.io/projected/c0de4cc4-f0b4-47fc-964d-f02027dd445d-kube-api-access-xc2mz\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.264003 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-scripts\") pod \"ceilometer-0\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") " pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.327143 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:16:24 crc kubenswrapper[4782]: I1124 12:16:24.898391 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.305846 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h75bq" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.467299 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-combined-ca-bundle\") pod \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.467439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-scripts\") pod \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.467491 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-config-data\") pod \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.467525 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb7sc\" (UniqueName: \"kubernetes.io/projected/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-kube-api-access-xb7sc\") pod \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\" (UID: \"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37\") " Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.477306 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-scripts" (OuterVolumeSpecName: "scripts") pod "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" (UID: "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.483598 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-kube-api-access-xb7sc" (OuterVolumeSpecName: "kube-api-access-xb7sc") pod "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" (UID: "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37"). InnerVolumeSpecName "kube-api-access-xb7sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.511757 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" (UID: "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.520238 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac84111a-e67e-4f05-8127-030bce907204" path="/var/lib/kubelet/pods/ac84111a-e67e-4f05-8127-030bce907204/volumes" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.529225 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-config-data" (OuterVolumeSpecName: "config-data") pod "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" (UID: "c358e6ea-3b34-4ec1-ba92-dd7437ccaf37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.569913 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.569946 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.569954 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.570501 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb7sc\" (UniqueName: \"kubernetes.io/projected/c358e6ea-3b34-4ec1-ba92-dd7437ccaf37-kube-api-access-xb7sc\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.922587 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h75bq" event={"ID":"c358e6ea-3b34-4ec1-ba92-dd7437ccaf37","Type":"ContainerDied","Data":"26f5015cad986b2fa6bd2e563d92ec416fcd64ac890f3bb57d3736453d58553b"} Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.922634 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f5015cad986b2fa6bd2e563d92ec416fcd64ac890f3bb57d3736453d58553b" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.922701 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h75bq" Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.927465 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerStarted","Data":"02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea"} Nov 24 12:16:25 crc kubenswrapper[4782]: I1124 12:16:25.927515 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerStarted","Data":"92bb8ab4f1db22bfbc4e50aa91a1ee0478f744a7f87ed83d456ed0a32104f5b0"} Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.073769 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:16:26 crc kubenswrapper[4782]: E1124 12:16:26.074529 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" containerName="nova-cell0-conductor-db-sync" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.074637 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" containerName="nova-cell0-conductor-db-sync" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.074967 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c358e6ea-3b34-4ec1-ba92-dd7437ccaf37" containerName="nova-cell0-conductor-db-sync" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.076749 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.106685 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bcqdb" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.106972 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.120851 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.194951 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.195266 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6nv\" (UniqueName: \"kubernetes.io/projected/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-kube-api-access-dm6nv\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.195306 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.304134 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.304196 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6nv\" (UniqueName: \"kubernetes.io/projected/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-kube-api-access-dm6nv\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.304230 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.310092 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.311772 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.359104 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6nv\" (UniqueName: \"kubernetes.io/projected/79cba7f4-7d61-489a-9c67-41a7a0dc1c28-kube-api-access-dm6nv\") pod \"nova-cell0-conductor-0\" (UID: \"79cba7f4-7d61-489a-9c67-41a7a0dc1c28\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.541499 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.813615 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:16:26 crc kubenswrapper[4782]: W1124 12:16:26.827909 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79cba7f4_7d61_489a_9c67_41a7a0dc1c28.slice/crio-74e5df6a2c77155263d101789a9feb950c53a8beb1386ef388fb5be168e3dd4a WatchSource:0}: Error finding container 74e5df6a2c77155263d101789a9feb950c53a8beb1386ef388fb5be168e3dd4a: Status 404 returned error can't find the container with id 74e5df6a2c77155263d101789a9feb950c53a8beb1386ef388fb5be168e3dd4a Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.939925 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerStarted","Data":"1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3"} Nov 24 12:16:26 crc kubenswrapper[4782]: I1124 12:16:26.947058 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"79cba7f4-7d61-489a-9c67-41a7a0dc1c28","Type":"ContainerStarted","Data":"74e5df6a2c77155263d101789a9feb950c53a8beb1386ef388fb5be168e3dd4a"} Nov 24 12:16:27 crc kubenswrapper[4782]: I1124 12:16:27.662294 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 12:16:27 crc kubenswrapper[4782]: I1124 12:16:27.767906 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6574f9bb76-jkv6h" podUID="41a8247d-b0d2-4a46-b108-bc260db36e11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 24 12:16:27 crc kubenswrapper[4782]: I1124 12:16:27.957531 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerStarted","Data":"2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6"} Nov 24 12:16:27 crc kubenswrapper[4782]: I1124 12:16:27.959084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"79cba7f4-7d61-489a-9c67-41a7a0dc1c28","Type":"ContainerStarted","Data":"f990e5a5aa1d49cdac2962285ac401f8507f7e35fe55f0582131c846169247f4"} Nov 24 12:16:27 crc kubenswrapper[4782]: I1124 12:16:27.959213 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:27 crc kubenswrapper[4782]: I1124 12:16:27.978572 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.97855327 podStartE2EDuration="1.97855327s" podCreationTimestamp="2025-11-24 12:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:27.972916598 +0000 UTC m=+1237.216750377" watchObservedRunningTime="2025-11-24 12:16:27.97855327 +0000 UTC m=+1237.222387049" Nov 24 12:16:28 crc kubenswrapper[4782]: I1124 12:16:28.968709 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerStarted","Data":"c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b"} Nov 24 12:16:28 crc kubenswrapper[4782]: I1124 12:16:28.993010 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.439725912 podStartE2EDuration="5.992992778s" podCreationTimestamp="2025-11-24 12:16:23 +0000 UTC" firstStartedPulling="2025-11-24 12:16:24.918083172 +0000 UTC m=+1234.161916941" lastFinishedPulling="2025-11-24 12:16:28.471350038 +0000 UTC m=+1237.715183807" observedRunningTime="2025-11-24 12:16:28.991179819 +0000 UTC m=+1238.235013608" watchObservedRunningTime="2025-11-24 12:16:28.992992778 +0000 UTC m=+1238.236826547" Nov 24 12:16:29 crc kubenswrapper[4782]: I1124 12:16:29.978135 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:16:36 crc kubenswrapper[4782]: I1124 12:16:36.566766 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.006861 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hcqdz"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.008300 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.010552 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.017222 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.030932 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hcqdz"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.103632 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjw4p\" (UniqueName: \"kubernetes.io/projected/73fb7825-28ae-412d-b01b-98cb9f74c06e-kube-api-access-vjw4p\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.103711 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-config-data\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.103759 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.103794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-scripts\") pod 
\"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.175579 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.176721 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.178871 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.187533 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.207523 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-config-data\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.207796 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.207895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-scripts\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.208053 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjw4p\" (UniqueName: \"kubernetes.io/projected/73fb7825-28ae-412d-b01b-98cb9f74c06e-kube-api-access-vjw4p\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.233516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-config-data\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.240204 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.241867 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-scripts\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.246929 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vjw4p\" (UniqueName: \"kubernetes.io/projected/73fb7825-28ae-412d-b01b-98cb9f74c06e-kube-api-access-vjw4p\") pod \"nova-cell0-cell-mapping-hcqdz\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.285871 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.287317 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.309830 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.312411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-config-data\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.312500 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxtq\" (UniqueName: \"kubernetes.io/projected/4f7aef62-6381-4d0b-b57e-8606bc69fad4-kube-api-access-qkxtq\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.312546 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.328716 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.393479 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.395022 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.398109 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.415486 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-config-data\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.415955 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.416028 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkxtq\" (UniqueName: \"kubernetes.io/projected/4f7aef62-6381-4d0b-b57e-8606bc69fad4-kube-api-access-qkxtq\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.416067 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dht\" (UniqueName: \"kubernetes.io/projected/e522b2c2-c5fa-450d-af59-d9d9b0855e84-kube-api-access-n4dht\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.416106 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.416250 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-config-data\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.416427 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e522b2c2-c5fa-450d-af59-d9d9b0855e84-logs\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.416652 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.421643 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-config-data\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.429933 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.466910 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkxtq\" (UniqueName: \"kubernetes.io/projected/4f7aef62-6381-4d0b-b57e-8606bc69fad4-kube-api-access-qkxtq\") pod \"nova-scheduler-0\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") " pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.493874 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.524756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrkk\" (UniqueName: \"kubernetes.io/projected/03383e51-418d-488c-8075-6238d0971e34-kube-api-access-gkrkk\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.524794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-config-data\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.524857 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-config-data\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.524879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e522b2c2-c5fa-450d-af59-d9d9b0855e84-logs\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.524944 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.524967 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03383e51-418d-488c-8075-6238d0971e34-logs\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.524988 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.525051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4dht\" (UniqueName: \"kubernetes.io/projected/e522b2c2-c5fa-450d-af59-d9d9b0855e84-kube-api-access-n4dht\") pod 
\"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.527783 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e522b2c2-c5fa-450d-af59-d9d9b0855e84-logs\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.528342 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-config-data\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.535101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.545220 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.578056 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.591420 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.601531 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.603636 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4dht\" (UniqueName: \"kubernetes.io/projected/e522b2c2-c5fa-450d-af59-d9d9b0855e84-kube-api-access-n4dht\") pod \"nova-metadata-0\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.610607 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.626200 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkrkk\" (UniqueName: \"kubernetes.io/projected/03383e51-418d-488c-8075-6238d0971e34-kube-api-access-gkrkk\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.629776 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-config-data\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.630276 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03383e51-418d-488c-8075-6238d0971e34-logs\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.630387 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.638551 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.639838 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03383e51-418d-488c-8075-6238d0971e34-logs\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.640588 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-config-data\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.650945 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.659695 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkrkk\" (UniqueName: \"kubernetes.io/projected/03383e51-418d-488c-8075-6238d0971e34-kube-api-access-gkrkk\") pod \"nova-api-0\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.692365 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-fd9hw"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.699306 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.700298 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-fd9hw"] Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.734230 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.734339 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.734421 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5tb\" (UniqueName: \"kubernetes.io/projected/da6d72d1-2b6f-4771-a6b8-fb12638a6920-kube-api-access-sp5tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837296 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m8k9\" (UniqueName: \"kubernetes.io/projected/7f861c7e-382d-4918-a489-9f98dac4a11e-kube-api-access-6m8k9\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837351 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837414 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837453 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-svc\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837473 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp5tb\" (UniqueName: \"kubernetes.io/projected/da6d72d1-2b6f-4771-a6b8-fb12638a6920-kube-api-access-sp5tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837488 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-config\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837533 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.837563 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.842912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.846408 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.879731 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp5tb\" (UniqueName: \"kubernetes.io/projected/da6d72d1-2b6f-4771-a6b8-fb12638a6920-kube-api-access-sp5tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.886943 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.940171 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m8k9\" (UniqueName: \"kubernetes.io/projected/7f861c7e-382d-4918-a489-9f98dac4a11e-kube-api-access-6m8k9\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.940613 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " 
pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.940681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-svc\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.940713 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-config\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.940750 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.940783 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.942284 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.942324 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-config\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.943341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-svc\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.943471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.943481 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.957872 4782 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:37 crc kubenswrapper[4782]: I1124 12:16:37.968814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m8k9\" (UniqueName: \"kubernetes.io/projected/7f861c7e-382d-4918-a489-9f98dac4a11e-kube-api-access-6m8k9\") pod \"dnsmasq-dns-865f5d856f-fd9hw\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:38 crc kubenswrapper[4782]: I1124 12:16:38.037957 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:38 crc kubenswrapper[4782]: I1124 12:16:38.193078 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hcqdz"] Nov 24 12:16:38 crc kubenswrapper[4782]: I1124 12:16:38.236387 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:16:38 crc kubenswrapper[4782]: W1124 12:16:38.260249 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73fb7825_28ae_412d_b01b_98cb9f74c06e.slice/crio-99c3f7b41349462ef2a4bef26a5d32c5918227ead8313374913a67198b3c45e3 WatchSource:0}: Error finding container 99c3f7b41349462ef2a4bef26a5d32c5918227ead8313374913a67198b3c45e3: Status 404 returned error can't find the container with id 99c3f7b41349462ef2a4bef26a5d32c5918227ead8313374913a67198b3c45e3 Nov 24 12:16:38 crc kubenswrapper[4782]: I1124 12:16:38.494205 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:38 crc kubenswrapper[4782]: I1124 12:16:38.791299 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:16:38 crc kubenswrapper[4782]: I1124 12:16:38.892758 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:16:38 crc kubenswrapper[4782]: I1124 12:16:38.959362 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-fd9hw"] Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.107597 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f7aef62-6381-4d0b-b57e-8606bc69fad4","Type":"ContainerStarted","Data":"625e67ac9af2c93d5ef3694354242dd8d357cd1a66ebf5669f2229d7cfcca947"} Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.111846 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hcqdz" event={"ID":"73fb7825-28ae-412d-b01b-98cb9f74c06e","Type":"ContainerStarted","Data":"c0937b103899b787db16e5c2944bf5ff8af9b48743c0588b359595a4eff78792"} Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.111888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hcqdz" event={"ID":"73fb7825-28ae-412d-b01b-98cb9f74c06e","Type":"ContainerStarted","Data":"99c3f7b41349462ef2a4bef26a5d32c5918227ead8313374913a67198b3c45e3"} Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.116570 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03383e51-418d-488c-8075-6238d0971e34","Type":"ContainerStarted","Data":"50793c7a4759fdeaec783fc1784655d9c9f92c471b433b329bd2a7b60b93a43e"} Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.120496 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" 
event={"ID":"7f861c7e-382d-4918-a489-9f98dac4a11e","Type":"ContainerStarted","Data":"79a7b8599dce9277e30e458c854bc2cd59975fb713e5d81078e688063299f928"} Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.124515 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e522b2c2-c5fa-450d-af59-d9d9b0855e84","Type":"ContainerStarted","Data":"0bac755f26da6cb67c8874d11e6236b70a9c9e6b618160d5845df9a336f59e29"} Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.133505 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da6d72d1-2b6f-4771-a6b8-fb12638a6920","Type":"ContainerStarted","Data":"62e5b3bffe7bfcba73f9a446d8c33a9e270715425ebee9f02f11218ef6bb9622"} Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.135012 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hcqdz" podStartSLOduration=3.134989443 podStartE2EDuration="3.134989443s" podCreationTimestamp="2025-11-24 12:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:39.127594532 +0000 UTC m=+1248.371428301" watchObservedRunningTime="2025-11-24 12:16:39.134989443 +0000 UTC m=+1248.378823212" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.214130 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bbh5l"] Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.232791 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.235980 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.236426 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.247129 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bbh5l"] Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.319269 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.319341 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hpcw\" (UniqueName: \"kubernetes.io/projected/84a12501-1d34-4a24-b2ad-4c932e61a478-kube-api-access-2hpcw\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.319391 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-config-data\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.319564 4782 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-scripts\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.420997 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.421114 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hpcw\" (UniqueName: \"kubernetes.io/projected/84a12501-1d34-4a24-b2ad-4c932e61a478-kube-api-access-2hpcw\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.421168 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-config-data\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.421286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-scripts\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.432062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-scripts\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.446292 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-config-data\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.458291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 12:16:39.467221 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hpcw\" (UniqueName: \"kubernetes.io/projected/84a12501-1d34-4a24-b2ad-4c932e61a478-kube-api-access-2hpcw\") pod \"nova-cell1-conductor-db-sync-bbh5l\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:39 crc kubenswrapper[4782]: I1124 
12:16:39.642867 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:40 crc kubenswrapper[4782]: I1124 12:16:40.153239 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerID="c7411f041626c76f4b03617b330f4c5764451438a6a523e1864559e99722bca4" exitCode=0 Nov 24 12:16:40 crc kubenswrapper[4782]: I1124 12:16:40.153356 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" event={"ID":"7f861c7e-382d-4918-a489-9f98dac4a11e","Type":"ContainerDied","Data":"c7411f041626c76f4b03617b330f4c5764451438a6a523e1864559e99722bca4"} Nov 24 12:16:40 crc kubenswrapper[4782]: I1124 12:16:40.335518 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bbh5l"] Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.174303 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" event={"ID":"84a12501-1d34-4a24-b2ad-4c932e61a478","Type":"ContainerStarted","Data":"549e8446cc35343fbfa718f9059dcb85e44b0a4cdfa044b27a2a486ca1a6c63a"} Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.174668 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" event={"ID":"84a12501-1d34-4a24-b2ad-4c932e61a478","Type":"ContainerStarted","Data":"cdd9c7bbca0817aa1d9e1e5968873f2ea0c1c5c5f365c511bcd52db667e2a0a8"} Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.203667 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" event={"ID":"7f861c7e-382d-4918-a489-9f98dac4a11e","Type":"ContainerStarted","Data":"8a51cd1cfc8f64974b144de3611fdd6f8f2e54414fb6ae9ee29a44992a51af99"} Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.204334 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.210076 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" podStartSLOduration=2.21004464 podStartE2EDuration="2.21004464s" podCreationTimestamp="2025-11-24 12:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:41.206821722 +0000 UTC m=+1250.450655491" watchObservedRunningTime="2025-11-24 12:16:41.21004464 +0000 UTC m=+1250.453878409" Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.254351 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" podStartSLOduration=4.25432289 podStartE2EDuration="4.25432289s" podCreationTimestamp="2025-11-24 12:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:41.249594892 +0000 UTC m=+1250.493428681" watchObservedRunningTime="2025-11-24 12:16:41.25432289 +0000 UTC m=+1250.498156679" Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.742744 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:16:41 crc kubenswrapper[4782]: I1124 12:16:41.756744 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:42 crc kubenswrapper[4782]: I1124 12:16:42.361601 4782 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:16:42 crc kubenswrapper[4782]: I1124 12:16:42.618762 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.268489 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f7aef62-6381-4d0b-b57e-8606bc69fad4","Type":"ContainerStarted","Data":"874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7"} Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.275206 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03383e51-418d-488c-8075-6238d0971e34","Type":"ContainerStarted","Data":"053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a"} Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.275266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03383e51-418d-488c-8075-6238d0971e34","Type":"ContainerStarted","Data":"b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7"} Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.278261 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e522b2c2-c5fa-450d-af59-d9d9b0855e84","Type":"ContainerStarted","Data":"6378f072ef303144c1ee0baca1bf713741fdcaa127c6ed949c0beb0f2fd6f212"} Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.278523 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e522b2c2-c5fa-450d-af59-d9d9b0855e84","Type":"ContainerStarted","Data":"5b6360d9cd9365b8723a869bae7714bb422fad357eb33cfe9ddd7ea78414daa9"} Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.279465 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da6d72d1-2b6f-4771-a6b8-fb12638a6920","Type":"ContainerStarted","Data":"cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464"} Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.279604 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="da6d72d1-2b6f-4771-a6b8-fb12638a6920" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464" gracePeriod=30 Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.294285 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.184586757 podStartE2EDuration="8.294267279s" podCreationTimestamp="2025-11-24 12:16:37 +0000 UTC" firstStartedPulling="2025-11-24 12:16:38.303179055 +0000 UTC m=+1247.547012824" lastFinishedPulling="2025-11-24 12:16:44.412859577 +0000 UTC m=+1253.656693346" observedRunningTime="2025-11-24 12:16:45.283476197 +0000 UTC m=+1254.527309966" watchObservedRunningTime="2025-11-24 12:16:45.294267279 +0000 UTC m=+1254.538101048" Nov 24 12:16:45 crc kubenswrapper[4782]: I1124 12:16:45.302799 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.780036908 podStartE2EDuration="8.30277541s" podCreationTimestamp="2025-11-24 12:16:37 +0000 UTC" firstStartedPulling="2025-11-24 12:16:38.890644809 +0000 UTC m=+1248.134478578" lastFinishedPulling="2025-11-24 12:16:44.413383311 +0000 UTC m=+1253.657217080" 
observedRunningTime="2025-11-24 12:16:45.300128438 +0000 UTC m=+1254.543962207" watchObservedRunningTime="2025-11-24 12:16:45.30277541 +0000 UTC m=+1254.546609189" Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.212126 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.212857 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6574f9bb76-jkv6h" Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.315051 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-log" containerID="cri-o://5b6360d9cd9365b8723a869bae7714bb422fad357eb33cfe9ddd7ea78414daa9" gracePeriod=30 Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.320585 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-metadata" containerID="cri-o://6378f072ef303144c1ee0baca1bf713741fdcaa127c6ed949c0beb0f2fd6f212" gracePeriod=30 Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.322742 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8684f6cd6d-mwlp6"] Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.322979 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon-log" containerID="cri-o://64cbbdb567acf5e868c8f354beb70b099e03307cd06d11f8948a9b2d2ca6c089" gracePeriod=30 Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.323038 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" containerID="cri-o://d98871cbaa1f6df1cde30a5a084019eaa955a9a85c74ff9b6bce5fc415979ee1" gracePeriod=30 Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.354771 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.83002779 podStartE2EDuration="9.354749906s" podCreationTimestamp="2025-11-24 12:16:37 +0000 UTC" firstStartedPulling="2025-11-24 12:16:38.893430215 +0000 UTC m=+1248.137263984" lastFinishedPulling="2025-11-24 12:16:44.418152331 +0000 UTC m=+1253.661986100" observedRunningTime="2025-11-24 12:16:46.352491184 +0000 UTC m=+1255.596324953" watchObservedRunningTime="2025-11-24 12:16:46.354749906 +0000 UTC m=+1255.598583665" Nov 24 12:16:46 crc kubenswrapper[4782]: I1124 12:16:46.402596 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.497924858 podStartE2EDuration="9.402576492s" podCreationTimestamp="2025-11-24 12:16:37 +0000 UTC" firstStartedPulling="2025-11-24 12:16:38.50958581 +0000 UTC m=+1247.753419579" lastFinishedPulling="2025-11-24 12:16:44.414237444 +0000 UTC m=+1253.658071213" observedRunningTime="2025-11-24 12:16:46.386664391 +0000 UTC m=+1255.630498160" watchObservedRunningTime="2025-11-24 12:16:46.402576492 +0000 UTC m=+1255.646410261" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.320627 4782 generic.go:334] "Generic (PLEG): container finished" podID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" 
containerID="6378f072ef303144c1ee0baca1bf713741fdcaa127c6ed949c0beb0f2fd6f212" exitCode=0 Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.320955 4782 generic.go:334] "Generic (PLEG): container finished" podID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerID="5b6360d9cd9365b8723a869bae7714bb422fad357eb33cfe9ddd7ea78414daa9" exitCode=143 Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.320981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e522b2c2-c5fa-450d-af59-d9d9b0855e84","Type":"ContainerDied","Data":"6378f072ef303144c1ee0baca1bf713741fdcaa127c6ed949c0beb0f2fd6f212"} Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.321011 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e522b2c2-c5fa-450d-af59-d9d9b0855e84","Type":"ContainerDied","Data":"5b6360d9cd9365b8723a869bae7714bb422fad357eb33cfe9ddd7ea78414daa9"} Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.321024 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e522b2c2-c5fa-450d-af59-d9d9b0855e84","Type":"ContainerDied","Data":"0bac755f26da6cb67c8874d11e6236b70a9c9e6b618160d5845df9a336f59e29"} Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.321038 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bac755f26da6cb67c8874d11e6236b70a9c9e6b618160d5845df9a336f59e29" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.355096 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.418463 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4dht\" (UniqueName: \"kubernetes.io/projected/e522b2c2-c5fa-450d-af59-d9d9b0855e84-kube-api-access-n4dht\") pod \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.419422 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e522b2c2-c5fa-450d-af59-d9d9b0855e84-logs\") pod \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.419464 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle\") pod \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.419569 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-config-data\") pod \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.420195 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e522b2c2-c5fa-450d-af59-d9d9b0855e84-logs" (OuterVolumeSpecName: "logs") pod "e522b2c2-c5fa-450d-af59-d9d9b0855e84" (UID: "e522b2c2-c5fa-450d-af59-d9d9b0855e84"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.433769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e522b2c2-c5fa-450d-af59-d9d9b0855e84-kube-api-access-n4dht" (OuterVolumeSpecName: "kube-api-access-n4dht") pod "e522b2c2-c5fa-450d-af59-d9d9b0855e84" (UID: "e522b2c2-c5fa-450d-af59-d9d9b0855e84"). InnerVolumeSpecName "kube-api-access-n4dht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:47 crc kubenswrapper[4782]: E1124 12:16:47.450798 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle podName:e522b2c2-c5fa-450d-af59-d9d9b0855e84 nodeName:}" failed. No retries permitted until 2025-11-24 12:16:47.950722994 +0000 UTC m=+1257.194556773 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle") pod "e522b2c2-c5fa-450d-af59-d9d9b0855e84" (UID: "e522b2c2-c5fa-450d-af59-d9d9b0855e84") : error deleting /var/lib/kubelet/pods/e522b2c2-c5fa-450d-af59-d9d9b0855e84/volume-subpaths: remove /var/lib/kubelet/pods/e522b2c2-c5fa-450d-af59-d9d9b0855e84/volume-subpaths: no such file or directory Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.453666 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-config-data" (OuterVolumeSpecName: "config-data") pod "e522b2c2-c5fa-450d-af59-d9d9b0855e84" (UID: "e522b2c2-c5fa-450d-af59-d9d9b0855e84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.508287 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.508496 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.521695 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4dht\" (UniqueName: \"kubernetes.io/projected/e522b2c2-c5fa-450d-af59-d9d9b0855e84-kube-api-access-n4dht\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.521728 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e522b2c2-c5fa-450d-af59-d9d9b0855e84-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.521739 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.530834 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.844293 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.844345 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:16:47 crc kubenswrapper[4782]: I1124 12:16:47.958708 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.030429 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle\") pod \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\" (UID: \"e522b2c2-c5fa-450d-af59-d9d9b0855e84\") " Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.050583 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e522b2c2-c5fa-450d-af59-d9d9b0855e84" (UID: "e522b2c2-c5fa-450d-af59-d9d9b0855e84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.052602 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.136340 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-xs4pw"] Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.136829 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" podUID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerName="dnsmasq-dns" containerID="cri-o://bd9d61de05614afc65b4c05996e406311f74abe42f44f4ecf3eb8d9612f2add6" gracePeriod=10 Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.136365 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e522b2c2-c5fa-450d-af59-d9d9b0855e84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.351495 4782 generic.go:334] "Generic (PLEG): container finished" podID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerID="bd9d61de05614afc65b4c05996e406311f74abe42f44f4ecf3eb8d9612f2add6" exitCode=0 Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.351612 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.358495 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" event={"ID":"cc6257af-a928-420b-a8cb-4a174b2a5776","Type":"ContainerDied","Data":"bd9d61de05614afc65b4c05996e406311f74abe42f44f4ecf3eb8d9612f2add6"} Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.435693 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.468276 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.468407 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.491234 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:48 crc kubenswrapper[4782]: E1124 12:16:48.491621 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-log" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.491645 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-log" Nov 24 12:16:48 crc kubenswrapper[4782]: E1124 12:16:48.491666 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-metadata" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.491672 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-metadata" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.491864 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-log" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.491889 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" containerName="nova-metadata-metadata" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.492823 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.501423 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.504125 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.504280 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.555082 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-config-data\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.555117 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b33c19b-ed62-4761-a4a5-c500e83337e8-logs\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.555138 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.555160 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.555427 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rpj2\" (UniqueName: \"kubernetes.io/projected/3b33c19b-ed62-4761-a4a5-c500e83337e8-kube-api-access-4rpj2\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.658089 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-config-data\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.658145 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b33c19b-ed62-4761-a4a5-c500e83337e8-logs\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.658178 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc 
kubenswrapper[4782]: I1124 12:16:48.658207 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.658312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rpj2\" (UniqueName: \"kubernetes.io/projected/3b33c19b-ed62-4761-a4a5-c500e83337e8-kube-api-access-4rpj2\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.660764 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b33c19b-ed62-4761-a4a5-c500e83337e8-logs\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.670638 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-config-data\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.678194 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.687047 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.692928 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rpj2\" (UniqueName: \"kubernetes.io/projected/3b33c19b-ed62-4761-a4a5-c500e83337e8-kube-api-access-4rpj2\") pod \"nova-metadata-0\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.822823 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.926655 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:16:48 crc kubenswrapper[4782]: I1124 12:16:48.926925 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.002218 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw"
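The two "Probe failed" records just above come from an HTTP startup probe whose GET did not return response headers before the probe's deadline. A self-contained sketch of that kind of check, assuming a 1-second client timeout (the kubelet default when timeoutSeconds is unset); the URL is simply the address taken from the record:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // Minimal sketch of an HTTP probe with a client-side deadline, in the
    // spirit of kubelet's prober. A slow endpoint yields the same "context
    // deadline exceeded (Client.Timeout exceeded while awaiting headers)"
    // failure seen in the records above.
    func probe(url string) (string, error) {
        client := &http.Client{Timeout: 1 * time.Second} // assumed 1s timeout
        resp, err := client.Get(url)
        if err != nil {
            return "failure", err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success", nil
        }
        return "failure", fmt.Errorf("HTTP %d", resp.StatusCode)
    }

    func main() {
        result, err := probe("http://10.217.0.188:8774/") // address from the record
        fmt.Println(result, err)
    }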
Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.189244 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-config\") pod \"cc6257af-a928-420b-a8cb-4a174b2a5776\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.189718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-svc\") pod \"cc6257af-a928-420b-a8cb-4a174b2a5776\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.189827 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc58r\" (UniqueName: \"kubernetes.io/projected/cc6257af-a928-420b-a8cb-4a174b2a5776-kube-api-access-pc58r\") pod \"cc6257af-a928-420b-a8cb-4a174b2a5776\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.189860 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-nb\") pod \"cc6257af-a928-420b-a8cb-4a174b2a5776\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.190025 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-swift-storage-0\") pod \"cc6257af-a928-420b-a8cb-4a174b2a5776\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.190052 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-sb\") pod \"cc6257af-a928-420b-a8cb-4a174b2a5776\" (UID: \"cc6257af-a928-420b-a8cb-4a174b2a5776\") " Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.207969 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6257af-a928-420b-a8cb-4a174b2a5776-kube-api-access-pc58r" (OuterVolumeSpecName: "kube-api-access-pc58r") pod "cc6257af-a928-420b-a8cb-4a174b2a5776" (UID: "cc6257af-a928-420b-a8cb-4a174b2a5776"). InnerVolumeSpecName "kube-api-access-pc58r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.265676 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc6257af-a928-420b-a8cb-4a174b2a5776" (UID: "cc6257af-a928-420b-a8cb-4a174b2a5776"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.277019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc6257af-a928-420b-a8cb-4a174b2a5776" (UID: "cc6257af-a928-420b-a8cb-4a174b2a5776"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.302724 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc58r\" (UniqueName: \"kubernetes.io/projected/cc6257af-a928-420b-a8cb-4a174b2a5776-kube-api-access-pc58r\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.302758 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.302766 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.303765 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc6257af-a928-420b-a8cb-4a174b2a5776" (UID: "cc6257af-a928-420b-a8cb-4a174b2a5776"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.323896 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc6257af-a928-420b-a8cb-4a174b2a5776" (UID: "cc6257af-a928-420b-a8cb-4a174b2a5776"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.345062 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-config" (OuterVolumeSpecName: "config") pod "cc6257af-a928-420b-a8cb-4a174b2a5776" (UID: "cc6257af-a928-420b-a8cb-4a174b2a5776"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.363909 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" event={"ID":"cc6257af-a928-420b-a8cb-4a174b2a5776","Type":"ContainerDied","Data":"53f44ad998adb08a0f7bca42beaa1e129ede4437a69216c4ebf181712a8a9a1a"} Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.363957 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-xs4pw" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.363998 4782 scope.go:117] "RemoveContainer" containerID="bd9d61de05614afc65b4c05996e406311f74abe42f44f4ecf3eb8d9612f2add6" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.406836 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.407058 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.407119 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6257af-a928-420b-a8cb-4a174b2a5776-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.416578 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-xs4pw"] Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.425115 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-xs4pw"] Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.504618 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6257af-a928-420b-a8cb-4a174b2a5776" path="/var/lib/kubelet/pods/cc6257af-a928-420b-a8cb-4a174b2a5776/volumes" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.505506 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e522b2c2-c5fa-450d-af59-d9d9b0855e84" path="/var/lib/kubelet/pods/e522b2c2-c5fa-450d-af59-d9d9b0855e84/volumes" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.591800 4782 scope.go:117] "RemoveContainer" containerID="6ca325d700f653a70d84930b48978ea7613d06b9fbef9bcec2909af6e729bf38" Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.618651 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:47652->10.217.0.145:8443: read: connection reset by peer" Nov 24 12:16:49 crc kubenswrapper[4782]: W1124 12:16:49.630163 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b33c19b_ed62_4761_a4a5_c500e83337e8.slice/crio-e52fdd504e4d09708ae9e3d79e3cc452daa61c6fb22f48efdb73d7e0d8f8e74a WatchSource:0}: Error finding container e52fdd504e4d09708ae9e3d79e3cc452daa61c6fb22f48efdb73d7e0d8f8e74a: Status 404 returned error can't find the container with id e52fdd504e4d09708ae9e3d79e3cc452daa61c6fb22f48efdb73d7e0d8f8e74a Nov 24 12:16:49 crc kubenswrapper[4782]: I1124 12:16:49.638577 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:50 crc kubenswrapper[4782]: I1124 12:16:50.379304 4782 generic.go:334] "Generic (PLEG): container finished" podID="b6cd757b-7259-4caf-b928-2dc936c99028" containerID="d98871cbaa1f6df1cde30a5a084019eaa955a9a85c74ff9b6bce5fc415979ee1" exitCode=0 Nov 24 12:16:50 crc kubenswrapper[4782]: I1124 12:16:50.379391 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" 
event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerDied","Data":"d98871cbaa1f6df1cde30a5a084019eaa955a9a85c74ff9b6bce5fc415979ee1"} Nov 24 12:16:50 crc kubenswrapper[4782]: I1124 12:16:50.379975 4782 scope.go:117] "RemoveContainer" containerID="fe33b33da506efdf8f0ec330790ceaef82fa73fd0855882c4ad104afd14f7bbc" Nov 24 12:16:50 crc kubenswrapper[4782]: I1124 12:16:50.385297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b33c19b-ed62-4761-a4a5-c500e83337e8","Type":"ContainerStarted","Data":"ef759dfbc7184f645db9bc174bbeb98b6bb7964cd6160d8d2267b5b471584942"} Nov 24 12:16:50 crc kubenswrapper[4782]: I1124 12:16:50.385336 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b33c19b-ed62-4761-a4a5-c500e83337e8","Type":"ContainerStarted","Data":"f9a121ee95b4159b2ab3bb1bd38667c8615798b0bb9a71d7934a6316c1d07dbc"} Nov 24 12:16:50 crc kubenswrapper[4782]: I1124 12:16:50.385347 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b33c19b-ed62-4761-a4a5-c500e83337e8","Type":"ContainerStarted","Data":"e52fdd504e4d09708ae9e3d79e3cc452daa61c6fb22f48efdb73d7e0d8f8e74a"} Nov 24 12:16:50 crc kubenswrapper[4782]: I1124 12:16:50.406729 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.40671162 podStartE2EDuration="2.40671162s" podCreationTimestamp="2025-11-24 12:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:50.401269103 +0000 UTC m=+1259.645102882" watchObservedRunningTime="2025-11-24 12:16:50.40671162 +0000 UTC m=+1259.650545389" Nov 24 12:16:51 crc kubenswrapper[4782]: I1124 12:16:51.394864 4782 generic.go:334] "Generic (PLEG): container finished" podID="73fb7825-28ae-412d-b01b-98cb9f74c06e" containerID="c0937b103899b787db16e5c2944bf5ff8af9b48743c0588b359595a4eff78792" exitCode=0 Nov 24 12:16:51 crc kubenswrapper[4782]: I1124 12:16:51.394948 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hcqdz" event={"ID":"73fb7825-28ae-412d-b01b-98cb9f74c06e","Type":"ContainerDied","Data":"c0937b103899b787db16e5c2944bf5ff8af9b48743c0588b359595a4eff78792"} Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.792454 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.889254 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-config-data\") pod \"73fb7825-28ae-412d-b01b-98cb9f74c06e\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.889336 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-combined-ca-bundle\") pod \"73fb7825-28ae-412d-b01b-98cb9f74c06e\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.890198 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-scripts\") pod \"73fb7825-28ae-412d-b01b-98cb9f74c06e\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.890752 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjw4p\" (UniqueName: \"kubernetes.io/projected/73fb7825-28ae-412d-b01b-98cb9f74c06e-kube-api-access-vjw4p\") pod \"73fb7825-28ae-412d-b01b-98cb9f74c06e\" (UID: \"73fb7825-28ae-412d-b01b-98cb9f74c06e\") " Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.897259 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fb7825-28ae-412d-b01b-98cb9f74c06e-kube-api-access-vjw4p" (OuterVolumeSpecName: "kube-api-access-vjw4p") pod "73fb7825-28ae-412d-b01b-98cb9f74c06e" (UID: "73fb7825-28ae-412d-b01b-98cb9f74c06e"). InnerVolumeSpecName "kube-api-access-vjw4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.898486 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-scripts" (OuterVolumeSpecName: "scripts") pod "73fb7825-28ae-412d-b01b-98cb9f74c06e" (UID: "73fb7825-28ae-412d-b01b-98cb9f74c06e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.928472 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-config-data" (OuterVolumeSpecName: "config-data") pod "73fb7825-28ae-412d-b01b-98cb9f74c06e" (UID: "73fb7825-28ae-412d-b01b-98cb9f74c06e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.932758 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73fb7825-28ae-412d-b01b-98cb9f74c06e" (UID: "73fb7825-28ae-412d-b01b-98cb9f74c06e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.993290 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.993330 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.993343 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fb7825-28ae-412d-b01b-98cb9f74c06e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:52 crc kubenswrapper[4782]: I1124 12:16:52.993352 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjw4p\" (UniqueName: \"kubernetes.io/projected/73fb7825-28ae-412d-b01b-98cb9f74c06e-kube-api-access-vjw4p\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.416332 4782 generic.go:334] "Generic (PLEG): container finished" podID="84a12501-1d34-4a24-b2ad-4c932e61a478" containerID="549e8446cc35343fbfa718f9059dcb85e44b0a4cdfa044b27a2a486ca1a6c63a" exitCode=0 Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.416605 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" event={"ID":"84a12501-1d34-4a24-b2ad-4c932e61a478","Type":"ContainerDied","Data":"549e8446cc35343fbfa718f9059dcb85e44b0a4cdfa044b27a2a486ca1a6c63a"} Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.419201 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hcqdz" event={"ID":"73fb7825-28ae-412d-b01b-98cb9f74c06e","Type":"ContainerDied","Data":"99c3f7b41349462ef2a4bef26a5d32c5918227ead8313374913a67198b3c45e3"} Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.419311 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c3f7b41349462ef2a4bef26a5d32c5918227ead8313374913a67198b3c45e3" Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.419453 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hcqdz" Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.609732 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.610069 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-log" containerID="cri-o://b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7" gracePeriod=30 Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.610252 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-api" containerID="cri-o://053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a" gracePeriod=30 Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.652203 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.652585 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4f7aef62-6381-4d0b-b57e-8606bc69fad4" containerName="nova-scheduler-scheduler" containerID="cri-o://874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7" gracePeriod=30 Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.663690 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.663930 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-log" containerID="cri-o://f9a121ee95b4159b2ab3bb1bd38667c8615798b0bb9a71d7934a6316c1d07dbc" gracePeriod=30 Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.664344 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-metadata" containerID="cri-o://ef759dfbc7184f645db9bc174bbeb98b6bb7964cd6160d8d2267b5b471584942" gracePeriod=30 Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.824036 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:16:53 crc kubenswrapper[4782]: I1124 12:16:53.824088 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.375836 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.435798 4782 generic.go:334] "Generic (PLEG): container finished" podID="03383e51-418d-488c-8075-6238d0971e34" containerID="b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7" exitCode=143 Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.435853 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03383e51-418d-488c-8075-6238d0971e34","Type":"ContainerDied","Data":"b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7"} Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.438709 4782 generic.go:334] "Generic (PLEG): container finished" podID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerID="ef759dfbc7184f645db9bc174bbeb98b6bb7964cd6160d8d2267b5b471584942" exitCode=0
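The "Killing container with a grace period" records above, and the exit codes that follow (0 for containers that shut down cleanly, 143 for ones that die on the signal, since 143 = 128 + SIGTERM's number 15), reflect the usual two-step termination: SIGTERM first, SIGKILL only once the grace period runs out. A Unix-only Go sketch of that pattern (illustrative, not CRI-O's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // SIGTERM first; SIGKILL only if the process outlives the grace period.
    func killWithGracePeriod(cmd *exec.Cmd, grace time.Duration) int {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM) // polite request to exit
        select {
        case <-done: // exited within the grace period
        case <-time.After(grace):
            cmd.Process.Kill() // grace period exhausted: SIGKILL
            <-done
        }
        code := cmd.ProcessState.ExitCode()
        if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
            code = 128 + int(ws.Signal()) // SIGTERM(15) -> 143, as in the records
        }
        return code
    }

    func main() {
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println("exitCode =", killWithGracePeriod(cmd, 30*time.Second))
    }

Killed by SIGTERM, the sleep process reports exitCode = 143, matching the b17e963e... and f9a121ee... records; a server that catches SIGTERM and exits normally reports 0, as the 6378f072... and ef759dfb... containers do.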
containerID="ef759dfbc7184f645db9bc174bbeb98b6bb7964cd6160d8d2267b5b471584942" exitCode=0 Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.438743 4782 generic.go:334] "Generic (PLEG): container finished" podID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerID="f9a121ee95b4159b2ab3bb1bd38667c8615798b0bb9a71d7934a6316c1d07dbc" exitCode=143 Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.438908 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b33c19b-ed62-4761-a4a5-c500e83337e8","Type":"ContainerDied","Data":"ef759dfbc7184f645db9bc174bbeb98b6bb7964cd6160d8d2267b5b471584942"} Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.438933 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b33c19b-ed62-4761-a4a5-c500e83337e8","Type":"ContainerDied","Data":"f9a121ee95b4159b2ab3bb1bd38667c8615798b0bb9a71d7934a6316c1d07dbc"} Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.554789 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.729500 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-combined-ca-bundle\") pod \"3b33c19b-ed62-4761-a4a5-c500e83337e8\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.729716 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rpj2\" (UniqueName: \"kubernetes.io/projected/3b33c19b-ed62-4761-a4a5-c500e83337e8-kube-api-access-4rpj2\") pod \"3b33c19b-ed62-4761-a4a5-c500e83337e8\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.729794 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b33c19b-ed62-4761-a4a5-c500e83337e8-logs\") pod \"3b33c19b-ed62-4761-a4a5-c500e83337e8\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.729835 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-config-data\") pod \"3b33c19b-ed62-4761-a4a5-c500e83337e8\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.729901 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-nova-metadata-tls-certs\") pod \"3b33c19b-ed62-4761-a4a5-c500e83337e8\" (UID: \"3b33c19b-ed62-4761-a4a5-c500e83337e8\") " Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.730234 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b33c19b-ed62-4761-a4a5-c500e83337e8-logs" (OuterVolumeSpecName: "logs") pod "3b33c19b-ed62-4761-a4a5-c500e83337e8" (UID: "3b33c19b-ed62-4761-a4a5-c500e83337e8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.730562 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b33c19b-ed62-4761-a4a5-c500e83337e8-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.734507 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b33c19b-ed62-4761-a4a5-c500e83337e8-kube-api-access-4rpj2" (OuterVolumeSpecName: "kube-api-access-4rpj2") pod "3b33c19b-ed62-4761-a4a5-c500e83337e8" (UID: "3b33c19b-ed62-4761-a4a5-c500e83337e8"). InnerVolumeSpecName "kube-api-access-4rpj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.813329 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b33c19b-ed62-4761-a4a5-c500e83337e8" (UID: "3b33c19b-ed62-4761-a4a5-c500e83337e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.832639 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.832686 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rpj2\" (UniqueName: \"kubernetes.io/projected/3b33c19b-ed62-4761-a4a5-c500e83337e8-kube-api-access-4rpj2\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.864621 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-config-data" (OuterVolumeSpecName: "config-data") pod "3b33c19b-ed62-4761-a4a5-c500e83337e8" (UID: "3b33c19b-ed62-4761-a4a5-c500e83337e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.890388 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3b33c19b-ed62-4761-a4a5-c500e83337e8" (UID: "3b33c19b-ed62-4761-a4a5-c500e83337e8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.934192 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.934225 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b33c19b-ed62-4761-a4a5-c500e83337e8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:54 crc kubenswrapper[4782]: I1124 12:16:54.960156 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.139329 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-scripts\") pod \"84a12501-1d34-4a24-b2ad-4c932e61a478\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.139459 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-config-data\") pod \"84a12501-1d34-4a24-b2ad-4c932e61a478\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.139514 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-combined-ca-bundle\") pod \"84a12501-1d34-4a24-b2ad-4c932e61a478\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.139541 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hpcw\" (UniqueName: \"kubernetes.io/projected/84a12501-1d34-4a24-b2ad-4c932e61a478-kube-api-access-2hpcw\") pod \"84a12501-1d34-4a24-b2ad-4c932e61a478\" (UID: \"84a12501-1d34-4a24-b2ad-4c932e61a478\") " Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.144002 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a12501-1d34-4a24-b2ad-4c932e61a478-kube-api-access-2hpcw" (OuterVolumeSpecName: "kube-api-access-2hpcw") pod "84a12501-1d34-4a24-b2ad-4c932e61a478" (UID: "84a12501-1d34-4a24-b2ad-4c932e61a478"). InnerVolumeSpecName "kube-api-access-2hpcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.144099 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-scripts" (OuterVolumeSpecName: "scripts") pod "84a12501-1d34-4a24-b2ad-4c932e61a478" (UID: "84a12501-1d34-4a24-b2ad-4c932e61a478"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.165748 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84a12501-1d34-4a24-b2ad-4c932e61a478" (UID: "84a12501-1d34-4a24-b2ad-4c932e61a478"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.166274 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-config-data" (OuterVolumeSpecName: "config-data") pod "84a12501-1d34-4a24-b2ad-4c932e61a478" (UID: "84a12501-1d34-4a24-b2ad-4c932e61a478"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.241864 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.241892 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.241903 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hpcw\" (UniqueName: \"kubernetes.io/projected/84a12501-1d34-4a24-b2ad-4c932e61a478-kube-api-access-2hpcw\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.241911 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a12501-1d34-4a24-b2ad-4c932e61a478-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.449044 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.449029 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bbh5l" event={"ID":"84a12501-1d34-4a24-b2ad-4c932e61a478","Type":"ContainerDied","Data":"cdd9c7bbca0817aa1d9e1e5968873f2ea0c1c5c5f365c511bcd52db667e2a0a8"} Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.449497 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd9c7bbca0817aa1d9e1e5968873f2ea0c1c5c5f365c511bcd52db667e2a0a8" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.452461 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b33c19b-ed62-4761-a4a5-c500e83337e8","Type":"ContainerDied","Data":"e52fdd504e4d09708ae9e3d79e3cc452daa61c6fb22f48efdb73d7e0d8f8e74a"} Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.452511 4782 scope.go:117] "RemoveContainer" containerID="ef759dfbc7184f645db9bc174bbeb98b6bb7964cd6160d8d2267b5b471584942" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.452712 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.481005 4782 scope.go:117] "RemoveContainer" containerID="f9a121ee95b4159b2ab3bb1bd38667c8615798b0bb9a71d7934a6316c1d07dbc" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.515212 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.536108 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.540560 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:55 crc kubenswrapper[4782]: E1124 12:16:55.540894 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-log" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.540909 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-log" Nov 24 12:16:55 crc kubenswrapper[4782]: E1124 12:16:55.540931 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fb7825-28ae-412d-b01b-98cb9f74c06e" containerName="nova-manage" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.540938 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fb7825-28ae-412d-b01b-98cb9f74c06e" containerName="nova-manage" Nov 24 12:16:55 crc kubenswrapper[4782]: E1124 12:16:55.540953 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a12501-1d34-4a24-b2ad-4c932e61a478" containerName="nova-cell1-conductor-db-sync" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.540958 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a12501-1d34-4a24-b2ad-4c932e61a478" containerName="nova-cell1-conductor-db-sync" Nov 24 12:16:55 crc kubenswrapper[4782]: E1124 12:16:55.540967 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-metadata" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.540973 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-metadata" Nov 24 12:16:55 crc kubenswrapper[4782]: E1124 12:16:55.540985 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerName="dnsmasq-dns" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.540991 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerName="dnsmasq-dns" Nov 24 12:16:55 crc kubenswrapper[4782]: E1124 12:16:55.541002 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerName="init" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.541010 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerName="init" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.541208 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fb7825-28ae-412d-b01b-98cb9f74c06e" containerName="nova-manage" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.541225 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a12501-1d34-4a24-b2ad-4c932e61a478" containerName="nova-cell1-conductor-db-sync" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.541232 
4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-metadata" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.541240 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6257af-a928-420b-a8cb-4a174b2a5776" containerName="dnsmasq-dns" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.541255 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" containerName="nova-metadata-log" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.542557 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.547215 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.547478 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.555902 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.557553 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.559722 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.574222 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.594238 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.648467 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-config-data\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.648505 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plnrb\" (UniqueName: \"kubernetes.io/projected/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-kube-api-access-plnrb\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.648564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.648602 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.648638 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-logs\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0"
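The VerifyControllerAttachedVolume, "MountVolume started", and "MountVolume.SetUp succeeded" sequence above, and its UnmountVolume / TearDown / "Volume detached" mirror earlier in the log, is the volume reconciler converging actual state onto desired state for each pod. A toy sketch of that desired-versus-actual loop (names and structure are illustrative only, not reconciler_common.go):

    package main

    import "fmt"

    // Toy volume reconciler: mount whatever the pod spec wants that is not
    // yet mounted, and tear down whatever is mounted but no longer wanted.
    func reconcile(desired, actual map[string]bool) {
        for v := range desired {
            if !actual[v] {
                fmt.Printf("MountVolume started for volume %q\n", v)
                actual[v] = true
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
            }
        }
        for v := range actual {
            if !desired[v] {
                fmt.Printf("UnmountVolume started for volume %q\n", v)
                delete(actual, v) // deleting during range is safe in Go
                fmt.Printf("Volume detached for volume %q\n", v)
            }
        }
    }

    func main() {
        // Hypothetical state for a replacement pod like nova-metadata-0.
        desired := map[string]bool{"config-data": true, "logs": true, "combined-ca-bundle": true}
        actual := map[string]bool{"logs": true, "kube-api-access-n4dht": true}
        reconcile(desired, actual)
    }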
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-logs\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750160 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-logs\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750229 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d44b32-67f5-4294-962e-e4c2821714f0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750301 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-config-data\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plnrb\" (UniqueName: \"kubernetes.io/projected/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-kube-api-access-plnrb\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750346 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d44b32-67f5-4294-962e-e4c2821714f0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjpp\" (UniqueName: \"kubernetes.io/projected/a1d44b32-67f5-4294-962e-e4c2821714f0-kube-api-access-twjpp\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750545 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750607 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.750777 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-logs\") pod \"nova-metadata-0\" (UID: 
\"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.754631 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.754637 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.755363 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-config-data\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.768321 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plnrb\" (UniqueName: \"kubernetes.io/projected/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-kube-api-access-plnrb\") pod \"nova-metadata-0\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.852134 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d44b32-67f5-4294-962e-e4c2821714f0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.852244 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twjpp\" (UniqueName: \"kubernetes.io/projected/a1d44b32-67f5-4294-962e-e4c2821714f0-kube-api-access-twjpp\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.852562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d44b32-67f5-4294-962e-e4c2821714f0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.855931 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d44b32-67f5-4294-962e-e4c2821714f0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.856617 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d44b32-67f5-4294-962e-e4c2821714f0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.870007 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-twjpp\" (UniqueName: \"kubernetes.io/projected/a1d44b32-67f5-4294-962e-e4c2821714f0-kube-api-access-twjpp\") pod \"nova-cell1-conductor-0\" (UID: \"a1d44b32-67f5-4294-962e-e4c2821714f0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.872067 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:16:55 crc kubenswrapper[4782]: I1124 12:16:55.884821 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:56 crc kubenswrapper[4782]: I1124 12:16:56.377777 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:16:56 crc kubenswrapper[4782]: W1124 12:16:56.378752 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1d44b32_67f5_4294_962e_e4c2821714f0.slice/crio-d0fdfd34c7c5e894ba6f6c010ae04f5055418072d0c45b436a0ccc3f16cb6a28 WatchSource:0}: Error finding container d0fdfd34c7c5e894ba6f6c010ae04f5055418072d0c45b436a0ccc3f16cb6a28: Status 404 returned error can't find the container with id d0fdfd34c7c5e894ba6f6c010ae04f5055418072d0c45b436a0ccc3f16cb6a28 Nov 24 12:16:56 crc kubenswrapper[4782]: I1124 12:16:56.388587 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:16:56 crc kubenswrapper[4782]: I1124 12:16:56.468192 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a1d44b32-67f5-4294-962e-e4c2821714f0","Type":"ContainerStarted","Data":"d0fdfd34c7c5e894ba6f6c010ae04f5055418072d0c45b436a0ccc3f16cb6a28"} Nov 24 12:16:56 crc kubenswrapper[4782]: I1124 12:16:56.474343 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2","Type":"ContainerStarted","Data":"8047a6955c3b6baada14d23b03cc955e0b4b6c5b7cf18264eee4ada7246f6d68"} Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.419661 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.493854 4782 generic.go:334] "Generic (PLEG): container finished" podID="03383e51-418d-488c-8075-6238d0971e34" containerID="053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a" exitCode=0 Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.494022 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.498057 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.500363 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.511405 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b33c19b-ed62-4761-a4a5-c500e83337e8" path="/var/lib/kubelet/pods/3b33c19b-ed62-4761-a4a5-c500e83337e8/volumes" Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.511742 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.511808 4782 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4f7aef62-6381-4d0b-b57e-8606bc69fad4" containerName="nova-scheduler-scheduler" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.514850 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03383e51-418d-488c-8075-6238d0971e34","Type":"ContainerDied","Data":"053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a"} Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.514888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03383e51-418d-488c-8075-6238d0971e34","Type":"ContainerDied","Data":"50793c7a4759fdeaec783fc1784655d9c9f92c471b433b329bd2a7b60b93a43e"} Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.514907 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.514918 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a1d44b32-67f5-4294-962e-e4c2821714f0","Type":"ContainerStarted","Data":"1ed0f93c9368a93f009925e2181a9c448422dca1372c574d86226dfdcfa68e87"} Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.514941 4782 scope.go:117] "RemoveContainer" containerID="053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.515739 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2","Type":"ContainerStarted","Data":"28573096fb3dbc569157bd2b3ffa7f7645e8e6ee6f6f5fd30a5f825c108145b0"} Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.515764 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2","Type":"ContainerStarted","Data":"821519c3aaa22322fd1c6d244ce2270836c6fa8a17b130e5fefc22b8fd145fd4"} Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.530290 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.530269745 podStartE2EDuration="2.530269745s" podCreationTimestamp="2025-11-24 12:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:57.523966094 +0000 UTC m=+1266.767799863" watchObservedRunningTime="2025-11-24 12:16:57.530269745 +0000 UTC m=+1266.774103524" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.546870 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5468495840000003 podStartE2EDuration="2.546849584s" podCreationTimestamp="2025-11-24 12:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:57.542662821 +0000 UTC m=+1266.786496590" watchObservedRunningTime="2025-11-24 12:16:57.546849584 +0000 UTC m=+1266.790683363" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.556278 4782 scope.go:117] "RemoveContainer" containerID="b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.587063 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkrkk\" (UniqueName: \"kubernetes.io/projected/03383e51-418d-488c-8075-6238d0971e34-kube-api-access-gkrkk\") pod \"03383e51-418d-488c-8075-6238d0971e34\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.587246 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-config-data\") pod \"03383e51-418d-488c-8075-6238d0971e34\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.587319 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03383e51-418d-488c-8075-6238d0971e34-logs\") pod \"03383e51-418d-488c-8075-6238d0971e34\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.587511 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-combined-ca-bundle\") pod \"03383e51-418d-488c-8075-6238d0971e34\" (UID: \"03383e51-418d-488c-8075-6238d0971e34\") " Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.588998 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03383e51-418d-488c-8075-6238d0971e34-logs" (OuterVolumeSpecName: "logs") pod "03383e51-418d-488c-8075-6238d0971e34" (UID: "03383e51-418d-488c-8075-6238d0971e34"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.590735 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03383e51-418d-488c-8075-6238d0971e34-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.592908 4782 scope.go:117] "RemoveContainer" containerID="053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a" Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.594298 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a\": container with ID starting with 053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a not found: ID does not exist" containerID="053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.594348 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a"} err="failed to get container status \"053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a\": rpc error: code = NotFound desc = could not find container \"053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a\": container with ID starting with 053c1f1a322a8fa8a1c98e3651bcc13151702211ef434e983e2e460b6cb30b6a not found: ID does not exist" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.594400 4782 scope.go:117] "RemoveContainer" containerID="b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7" Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.594666 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7\": container with ID starting with b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7 not found: ID does not exist" containerID="b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.594691 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7"} err="failed to get container status \"b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7\": rpc error: code = NotFound desc = could not find container \"b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7\": container with ID starting with b17e963e2919a4749256f969e7f660ec5a4aaec540a08553a44b988b70fac3f7 not found: ID does not exist" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.600839 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03383e51-418d-488c-8075-6238d0971e34-kube-api-access-gkrkk" (OuterVolumeSpecName: "kube-api-access-gkrkk") pod "03383e51-418d-488c-8075-6238d0971e34" (UID: "03383e51-418d-488c-8075-6238d0971e34"). InnerVolumeSpecName "kube-api-access-gkrkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.634684 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-config-data" (OuterVolumeSpecName: "config-data") pod "03383e51-418d-488c-8075-6238d0971e34" (UID: "03383e51-418d-488c-8075-6238d0971e34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.660873 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8684f6cd6d-mwlp6" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.678853 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03383e51-418d-488c-8075-6238d0971e34" (UID: "03383e51-418d-488c-8075-6238d0971e34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.693675 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.693704 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03383e51-418d-488c-8075-6238d0971e34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.693719 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkrkk\" (UniqueName: \"kubernetes.io/projected/03383e51-418d-488c-8075-6238d0971e34-kube-api-access-gkrkk\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.842689 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.855799 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.870418 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.870801 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-api" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.870821 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-api" Nov 24 12:16:57 crc kubenswrapper[4782]: E1124 12:16:57.870858 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-log" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.870865 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-log" Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.871052 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-log" 
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.871113 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="03383e51-418d-488c-8075-6238d0971e34" containerName="nova-api-api"
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.876728 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.880927 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.882319 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.901621 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.901680 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-config-data\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.901727 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9bmg\" (UniqueName: \"kubernetes.io/projected/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-kube-api-access-w9bmg\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:57 crc kubenswrapper[4782]: I1124 12:16:57.901774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-logs\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.004027 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.004083 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-config-data\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.004144 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9bmg\" (UniqueName: \"kubernetes.io/projected/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-kube-api-access-w9bmg\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.004202 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-logs\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.004702 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-logs\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.009169 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.014048 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-config-data\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.043085 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9bmg\" (UniqueName: \"kubernetes.io/projected/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-kube-api-access-w9bmg\") pod \"nova-api-0\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.195424 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.469889 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.529336 4782 generic.go:334] "Generic (PLEG): container finished" podID="4f7aef62-6381-4d0b-b57e-8606bc69fad4" containerID="874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7" exitCode=0
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.530055 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.530395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f7aef62-6381-4d0b-b57e-8606bc69fad4","Type":"ContainerDied","Data":"874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7"}
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.530432 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f7aef62-6381-4d0b-b57e-8606bc69fad4","Type":"ContainerDied","Data":"625e67ac9af2c93d5ef3694354242dd8d357cd1a66ebf5669f2229d7cfcca947"}
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.530451 4782 scope.go:117] "RemoveContainer" containerID="874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.563285 4782 scope.go:117] "RemoveContainer" containerID="874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7"
Nov 24 12:16:58 crc kubenswrapper[4782]: E1124 12:16:58.564350 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7\": container with ID starting with 874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7 not found: ID does not exist" containerID="874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.564416 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7"} err="failed to get container status \"874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7\": rpc error: code = NotFound desc = could not find container \"874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7\": container with ID starting with 874fc6b150557cbbb53df16807605d2919ba5f131b567e5c8edf0eea303d44e7 not found: ID does not exist"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.629683 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-combined-ca-bundle\") pod \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") "
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.630393 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkxtq\" (UniqueName: \"kubernetes.io/projected/4f7aef62-6381-4d0b-b57e-8606bc69fad4-kube-api-access-qkxtq\") pod \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") "
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.630427 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-config-data\") pod \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\" (UID: \"4f7aef62-6381-4d0b-b57e-8606bc69fad4\") "
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.639344 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7aef62-6381-4d0b-b57e-8606bc69fad4-kube-api-access-qkxtq" (OuterVolumeSpecName: "kube-api-access-qkxtq") pod "4f7aef62-6381-4d0b-b57e-8606bc69fad4" (UID: "4f7aef62-6381-4d0b-b57e-8606bc69fad4"). InnerVolumeSpecName "kube-api-access-qkxtq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.668283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-config-data" (OuterVolumeSpecName: "config-data") pod "4f7aef62-6381-4d0b-b57e-8606bc69fad4" (UID: "4f7aef62-6381-4d0b-b57e-8606bc69fad4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.675894 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f7aef62-6381-4d0b-b57e-8606bc69fad4" (UID: "4f7aef62-6381-4d0b-b57e-8606bc69fad4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.732065 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.732102 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkxtq\" (UniqueName: \"kubernetes.io/projected/4f7aef62-6381-4d0b-b57e-8606bc69fad4-kube-api-access-qkxtq\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.732113 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7aef62-6381-4d0b-b57e-8606bc69fad4-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.779485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 12:16:58 crc kubenswrapper[4782]: W1124 12:16:58.784662 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fa76cd5_9bfb_46c9_b5c2_6a59a81487d5.slice/crio-5a008c073eaf2c20a1d0c062b5a10c23049a20f02ac35786cbfdfb7b48f87de8 WatchSource:0}: Error finding container 5a008c073eaf2c20a1d0c062b5a10c23049a20f02ac35786cbfdfb7b48f87de8: Status 404 returned error can't find the container with id 5a008c073eaf2c20a1d0c062b5a10c23049a20f02ac35786cbfdfb7b48f87de8
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.905140 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.905360 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1c21319e-8ce0-4e9d-87e5-abaa9e51eae2" containerName="kube-state-metrics" containerID="cri-o://9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9" gracePeriod=30
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.927685 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.942243 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.958442 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 12:16:58 crc kubenswrapper[4782]: E1124 12:16:58.958791 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7aef62-6381-4d0b-b57e-8606bc69fad4" containerName="nova-scheduler-scheduler"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.958806 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7aef62-6381-4d0b-b57e-8606bc69fad4" containerName="nova-scheduler-scheduler"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.958971 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7aef62-6381-4d0b-b57e-8606bc69fad4" containerName="nova-scheduler-scheduler"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.959542 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.963623 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 24 12:16:58 crc kubenswrapper[4782]: I1124 12:16:58.977040 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.144125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-config-data\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.144596 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wnc\" (UniqueName: \"kubernetes.io/projected/8769e2de-b84d-47e9-9917-4dddfc663732-kube-api-access-t9wnc\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.144730 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.251290 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-config-data\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.251351 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wnc\" (UniqueName: \"kubernetes.io/projected/8769e2de-b84d-47e9-9917-4dddfc663732-kube-api-access-t9wnc\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.251407 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.254963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-config-data\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.254995 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.273982 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wnc\" (UniqueName: \"kubernetes.io/projected/8769e2de-b84d-47e9-9917-4dddfc663732-kube-api-access-t9wnc\") pod \"nova-scheduler-0\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.313603 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.445466 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.457080 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b46n\" (UniqueName: \"kubernetes.io/projected/1c21319e-8ce0-4e9d-87e5-abaa9e51eae2-kube-api-access-4b46n\") pod \"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2\" (UID: \"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2\") "
Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.461365 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c21319e-8ce0-4e9d-87e5-abaa9e51eae2-kube-api-access-4b46n" (OuterVolumeSpecName: "kube-api-access-4b46n") pod "1c21319e-8ce0-4e9d-87e5-abaa9e51eae2" (UID: "1c21319e-8ce0-4e9d-87e5-abaa9e51eae2"). InnerVolumeSpecName "kube-api-access-4b46n". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.502690 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03383e51-418d-488c-8075-6238d0971e34" path="/var/lib/kubelet/pods/03383e51-418d-488c-8075-6238d0971e34/volumes" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.503339 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7aef62-6381-4d0b-b57e-8606bc69fad4" path="/var/lib/kubelet/pods/4f7aef62-6381-4d0b-b57e-8606bc69fad4/volumes" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.554282 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c21319e-8ce0-4e9d-87e5-abaa9e51eae2" containerID="9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9" exitCode=2 Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.554352 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2","Type":"ContainerDied","Data":"9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9"} Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.554402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c21319e-8ce0-4e9d-87e5-abaa9e51eae2","Type":"ContainerDied","Data":"4bb59431bc2cf32f742ed0eb7e4fbadf3bed58a1859b3d836248c412325215ac"} Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.554425 4782 scope.go:117] "RemoveContainer" containerID="9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.554573 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.563042 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b46n\" (UniqueName: \"kubernetes.io/projected/1c21319e-8ce0-4e9d-87e5-abaa9e51eae2-kube-api-access-4b46n\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.566246 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5","Type":"ContainerStarted","Data":"a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6"} Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.566280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5","Type":"ContainerStarted","Data":"2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6"} Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.566292 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5","Type":"ContainerStarted","Data":"5a008c073eaf2c20a1d0c062b5a10c23049a20f02ac35786cbfdfb7b48f87de8"} Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.612200 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.659314 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.660516 4782 scope.go:117] "RemoveContainer" containerID="9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9" Nov 24 12:16:59 crc kubenswrapper[4782]: E1124 12:16:59.661640 4782 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9\": container with ID starting with 9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9 not found: ID does not exist" containerID="9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.662211 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9"} err="failed to get container status \"9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9\": rpc error: code = NotFound desc = could not find container \"9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9\": container with ID starting with 9962f18e962eee669c6ad870f3ef2ab5d6fb21b7a0ca56564ecca4827bba9ad9 not found: ID does not exist" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.678454 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:16:59 crc kubenswrapper[4782]: E1124 12:16:59.678969 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c21319e-8ce0-4e9d-87e5-abaa9e51eae2" containerName="kube-state-metrics" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.678986 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c21319e-8ce0-4e9d-87e5-abaa9e51eae2" containerName="kube-state-metrics" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.679227 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c21319e-8ce0-4e9d-87e5-abaa9e51eae2" containerName="kube-state-metrics" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.680100 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.682048 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.682410 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.684354 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.684300013 podStartE2EDuration="2.684300013s" podCreationTimestamp="2025-11-24 12:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:59.613328699 +0000 UTC m=+1268.857162468" watchObservedRunningTime="2025-11-24 12:16:59.684300013 +0000 UTC m=+1268.928133802" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.711977 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.871939 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.872196 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.872220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.872265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz622\" (UniqueName: \"kubernetes.io/projected/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-api-access-cz622\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.916543 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.974534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.974604 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.974646 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.974726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz622\" (UniqueName: \"kubernetes.io/projected/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-api-access-cz622\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.984752 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.985925 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.988925 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f389ec5-41d8-4afb-9df2-792618e38c30-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:16:59 crc kubenswrapper[4782]: I1124 12:16:59.996880 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz622\" (UniqueName: \"kubernetes.io/projected/6f389ec5-41d8-4afb-9df2-792618e38c30-kube-api-access-cz622\") pod \"kube-state-metrics-0\" (UID: \"6f389ec5-41d8-4afb-9df2-792618e38c30\") " pod="openstack/kube-state-metrics-0" Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.013097 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.411101 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.411433 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.577493 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.578847 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8769e2de-b84d-47e9-9917-4dddfc663732","Type":"ContainerStarted","Data":"5a9615f9b31ca6cf95f7360da35f57ee061a7b011c03ca3c2696f9449e26f13a"}
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.578888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8769e2de-b84d-47e9-9917-4dddfc663732","Type":"ContainerStarted","Data":"d361d979dd88c995250c3c9942d46886eaac1b31c7b4061a809403acd22460dc"}
Nov 24 12:17:00 crc kubenswrapper[4782]: W1124 12:17:00.584644 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f389ec5_41d8_4afb_9df2_792618e38c30.slice/crio-feb300030d59466cba4ca9fe5ca50a04012d642e5f32eee1249e3a4be5fb416c WatchSource:0}: Error finding container feb300030d59466cba4ca9fe5ca50a04012d642e5f32eee1249e3a4be5fb416c: Status 404 returned error can't find the container with id feb300030d59466cba4ca9fe5ca50a04012d642e5f32eee1249e3a4be5fb416c
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.599664 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.599646025 podStartE2EDuration="2.599646025s" podCreationTimestamp="2025-11-24 12:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:00.596333355 +0000 UTC m=+1269.840167124" watchObservedRunningTime="2025-11-24 12:17:00.599646025 +0000 UTC m=+1269.843479794"
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.872785 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 12:17:00 crc kubenswrapper[4782]: I1124 12:17:00.873985 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.225820 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.226077 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-central-agent" containerID="cri-o://02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea" gracePeriod=30
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.226164 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="proxy-httpd" containerID="cri-o://c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b" gracePeriod=30
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.226185 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="sg-core" containerID="cri-o://2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6" gracePeriod=30
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.226217 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-notification-agent" containerID="cri-o://1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3" gracePeriod=30
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.504927 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c21319e-8ce0-4e9d-87e5-abaa9e51eae2" path="/var/lib/kubelet/pods/1c21319e-8ce0-4e9d-87e5-abaa9e51eae2/volumes"
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.600855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f389ec5-41d8-4afb-9df2-792618e38c30","Type":"ContainerStarted","Data":"cd7123778fad0266226f6072a740b27f5e86f270ccbfdddb94b13e35446a8982"}
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.600967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f389ec5-41d8-4afb-9df2-792618e38c30","Type":"ContainerStarted","Data":"feb300030d59466cba4ca9fe5ca50a04012d642e5f32eee1249e3a4be5fb416c"}
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.601893 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.604946 4782 generic.go:334] "Generic (PLEG): container finished" podID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerID="c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b" exitCode=0
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.605114 4782 generic.go:334] "Generic (PLEG): container finished" podID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerID="2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6" exitCode=2
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.608564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerDied","Data":"c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b"}
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.608679 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerDied","Data":"2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6"}
Nov 24 12:17:01 crc kubenswrapper[4782]: I1124 12:17:01.647627 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.292111996 podStartE2EDuration="2.64754316s" podCreationTimestamp="2025-11-24 12:16:59 +0000 UTC" firstStartedPulling="2025-11-24 12:17:00.588127233 +0000 UTC m=+1269.831961002" lastFinishedPulling="2025-11-24 12:17:00.943558397 +0000 UTC m=+1270.187392166" observedRunningTime="2025-11-24 12:17:01.623111428 +0000 UTC m=+1270.866945207" watchObservedRunningTime="2025-11-24 12:17:01.64754316 +0000 UTC m=+1270.891376929"
Nov 24 12:17:02 crc kubenswrapper[4782]: I1124 12:17:02.695296 4782 generic.go:334] "Generic (PLEG): container finished" podID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerID="02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea" exitCode=0
Nov 24 12:17:02 crc kubenswrapper[4782]: I1124 12:17:02.695412 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerDied","Data":"02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea"}
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.074877 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.095509 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-config-data\") pod \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") "
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.095550 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-scripts\") pod \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") "
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.095580 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-sg-core-conf-yaml\") pod \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") "
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.095621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-combined-ca-bundle\") pod \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") "
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.095668 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-run-httpd\") pod \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") "
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.095772 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc2mz\" (UniqueName: \"kubernetes.io/projected/c0de4cc4-f0b4-47fc-964d-f02027dd445d-kube-api-access-xc2mz\") pod \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") "
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.095817 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-log-httpd\") pod \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\" (UID: \"c0de4cc4-f0b4-47fc-964d-f02027dd445d\") "
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.096814 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0de4cc4-f0b4-47fc-964d-f02027dd445d" (UID: "c0de4cc4-f0b4-47fc-964d-f02027dd445d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.104861 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-scripts" (OuterVolumeSpecName: "scripts") pod "c0de4cc4-f0b4-47fc-964d-f02027dd445d" (UID: "c0de4cc4-f0b4-47fc-964d-f02027dd445d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.105127 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0de4cc4-f0b4-47fc-964d-f02027dd445d" (UID: "c0de4cc4-f0b4-47fc-964d-f02027dd445d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.112216 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0de4cc4-f0b4-47fc-964d-f02027dd445d-kube-api-access-xc2mz" (OuterVolumeSpecName: "kube-api-access-xc2mz") pod "c0de4cc4-f0b4-47fc-964d-f02027dd445d" (UID: "c0de4cc4-f0b4-47fc-964d-f02027dd445d"). InnerVolumeSpecName "kube-api-access-xc2mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.198351 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc2mz\" (UniqueName: \"kubernetes.io/projected/c0de4cc4-f0b4-47fc-964d-f02027dd445d-kube-api-access-xc2mz\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.198398 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.198410 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.198418 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0de4cc4-f0b4-47fc-964d-f02027dd445d-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.200411 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0de4cc4-f0b4-47fc-964d-f02027dd445d" (UID: "c0de4cc4-f0b4-47fc-964d-f02027dd445d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.230122 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0de4cc4-f0b4-47fc-964d-f02027dd445d" (UID: "c0de4cc4-f0b4-47fc-964d-f02027dd445d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.250447 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-config-data" (OuterVolumeSpecName: "config-data") pod "c0de4cc4-f0b4-47fc-964d-f02027dd445d" (UID: "c0de4cc4-f0b4-47fc-964d-f02027dd445d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.300663 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.300686 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.300695 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0de4cc4-f0b4-47fc-964d-f02027dd445d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.314969 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.713149 4782 generic.go:334] "Generic (PLEG): container finished" podID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerID="1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3" exitCode=0
Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.713213 4782 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.713217 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerDied","Data":"1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3"} Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.713258 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0de4cc4-f0b4-47fc-964d-f02027dd445d","Type":"ContainerDied","Data":"92bb8ab4f1db22bfbc4e50aa91a1ee0478f744a7f87ed83d456ed0a32104f5b0"} Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.713280 4782 scope.go:117] "RemoveContainer" containerID="c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.753613 4782 scope.go:117] "RemoveContainer" containerID="2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.825745 4782 scope.go:117] "RemoveContainer" containerID="1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.828512 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.843559 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.847344 4782 scope.go:117] "RemoveContainer" containerID="02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.855702 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.856117 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-central-agent" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856134 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-central-agent" Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.856156 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-notification-agent" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856163 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-notification-agent" Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.856186 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="sg-core" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856192 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="sg-core" Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.856217 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="proxy-httpd" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856224 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="proxy-httpd" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856406 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="proxy-httpd" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856422 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-central-agent" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856441 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="sg-core" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.856453 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" containerName="ceilometer-notification-agent" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.858250 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.870518 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.870751 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.870864 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.876567 4782 scope.go:117] "RemoveContainer" containerID="c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b" Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.877357 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b\": container with ID starting with c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b not found: ID does not exist" containerID="c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.877460 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b"} err="failed to get container status \"c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b\": rpc error: code = NotFound desc = could not find container \"c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b\": container with ID starting with c004d31480038edd511c39431d4d8266a582b76874d02c52d0850be1d401e19b not found: ID does not exist" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.877480 4782 scope.go:117] "RemoveContainer" containerID="2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6" Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.878194 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6\": container with ID starting with 2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6 not found: ID does not exist" containerID="2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.878230 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6"} err="failed to get container status 
\"2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6\": rpc error: code = NotFound desc = could not find container \"2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6\": container with ID starting with 2dc687950d4b3dce2d0d3a46bc4a8b2d2554c3174d7cef71e7de0ae0b85f52e6 not found: ID does not exist" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.878255 4782 scope.go:117] "RemoveContainer" containerID="1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3" Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.879344 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3\": container with ID starting with 1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3 not found: ID does not exist" containerID="1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.879385 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3"} err="failed to get container status \"1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3\": rpc error: code = NotFound desc = could not find container \"1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3\": container with ID starting with 1a577f4a6882c76e43a42f97f252ac2f480e9d2e7ba167640704461221895fb3 not found: ID does not exist" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.879403 4782 scope.go:117] "RemoveContainer" containerID="02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea" Nov 24 12:17:04 crc kubenswrapper[4782]: E1124 12:17:04.881569 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea\": container with ID starting with 02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea not found: ID does not exist" containerID="02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.881605 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea"} err="failed to get container status \"02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea\": rpc error: code = NotFound desc = could not find container \"02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea\": container with ID starting with 02db03a99629227bf25a510cf1d6c5079060376c0452075f4f64c5a7ede787ea not found: ID does not exist" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.887364 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911520 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911575 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-scripts\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911636 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911653 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911710 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911725 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911773 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-config-data\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:04 crc kubenswrapper[4782]: I1124 12:17:04.911903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c49qp\" (UniqueName: \"kubernetes.io/projected/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-kube-api-access-c49qp\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.016009 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c49qp\" (UniqueName: \"kubernetes.io/projected/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-kube-api-access-c49qp\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.016265 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.017249 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-scripts\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 
12:17:05.017292 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.017317 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.017351 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.017396 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.017424 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-config-data\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.019258 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.020098 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.021300 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.022975 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-config-data\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.023133 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.032546 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-scripts\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.032711 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.033568 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c49qp\" (UniqueName: \"kubernetes.io/projected/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-kube-api-access-c49qp\") pod \"ceilometer-0\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.177011 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.501220 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0de4cc4-f0b4-47fc-964d-f02027dd445d" path="/var/lib/kubelet/pods/c0de4cc4-f0b4-47fc-964d-f02027dd445d/volumes" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.626069 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.722966 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerStarted","Data":"a38f3108c26c471aa9025577e967da4998da75263ba501a4f018ed59868f84a7"} Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.873099 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.873139 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:17:05 crc kubenswrapper[4782]: I1124 12:17:05.917846 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 12:17:06 crc kubenswrapper[4782]: I1124 12:17:06.735354 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerStarted","Data":"06943e7445c8fc9d33afec240b6d57010302ad9ed8a821cd61294413344369be"} Nov 24 12:17:06 crc kubenswrapper[4782]: I1124 12:17:06.887646 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:06 crc kubenswrapper[4782]: I1124 12:17:06.887991 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:07 crc kubenswrapper[4782]: I1124 12:17:07.660678 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8684f6cd6d-mwlp6" 
podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 12:17:07 crc kubenswrapper[4782]: I1124 12:17:07.661076 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:17:07 crc kubenswrapper[4782]: I1124 12:17:07.745022 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerStarted","Data":"157f1d8891dabc18c763823ba464332795228ebdcf06fa73a32ff9185ca4f3fe"} Nov 24 12:17:08 crc kubenswrapper[4782]: I1124 12:17:08.196141 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:17:08 crc kubenswrapper[4782]: I1124 12:17:08.197609 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:17:08 crc kubenswrapper[4782]: I1124 12:17:08.756095 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerStarted","Data":"83d90465fa7997488aa04583a9e5579eb638b8083c921820fd982adbea32f53a"} Nov 24 12:17:09 crc kubenswrapper[4782]: I1124 12:17:09.278531 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:09 crc kubenswrapper[4782]: I1124 12:17:09.278525 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:09 crc kubenswrapper[4782]: I1124 12:17:09.314343 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:17:09 crc kubenswrapper[4782]: I1124 12:17:09.344198 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:17:09 crc kubenswrapper[4782]: I1124 12:17:09.774920 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerStarted","Data":"ea5a00eb9ed8e932299140d28f0dc9200abbc0b739e16d7649581a6a6559665d"} Nov 24 12:17:09 crc kubenswrapper[4782]: I1124 12:17:09.798706 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.001844761 podStartE2EDuration="5.798688969s" podCreationTimestamp="2025-11-24 12:17:04 +0000 UTC" firstStartedPulling="2025-11-24 12:17:05.633767993 +0000 UTC m=+1274.877601772" lastFinishedPulling="2025-11-24 12:17:09.430612211 +0000 UTC m=+1278.674445980" observedRunningTime="2025-11-24 12:17:09.796036927 +0000 UTC m=+1279.039870706" watchObservedRunningTime="2025-11-24 12:17:09.798688969 +0000 UTC m=+1279.042522738" Nov 24 12:17:09 crc kubenswrapper[4782]: I1124 12:17:09.842364 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:17:10 crc kubenswrapper[4782]: I1124 12:17:10.032731 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 12:17:10 crc kubenswrapper[4782]: I1124 12:17:10.783119 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.687013 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.738151 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-config-data\") pod \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.738400 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-combined-ca-bundle\") pod \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.738445 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp5tb\" (UniqueName: \"kubernetes.io/projected/da6d72d1-2b6f-4771-a6b8-fb12638a6920-kube-api-access-sp5tb\") pod \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\" (UID: \"da6d72d1-2b6f-4771-a6b8-fb12638a6920\") " Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.766641 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6d72d1-2b6f-4771-a6b8-fb12638a6920-kube-api-access-sp5tb" (OuterVolumeSpecName: "kube-api-access-sp5tb") pod "da6d72d1-2b6f-4771-a6b8-fb12638a6920" (UID: "da6d72d1-2b6f-4771-a6b8-fb12638a6920"). InnerVolumeSpecName "kube-api-access-sp5tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.770752 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da6d72d1-2b6f-4771-a6b8-fb12638a6920" (UID: "da6d72d1-2b6f-4771-a6b8-fb12638a6920"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.783685 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-config-data" (OuterVolumeSpecName: "config-data") pod "da6d72d1-2b6f-4771-a6b8-fb12638a6920" (UID: "da6d72d1-2b6f-4771-a6b8-fb12638a6920"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.838438 4782 generic.go:334] "Generic (PLEG): container finished" podID="da6d72d1-2b6f-4771-a6b8-fb12638a6920" containerID="cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464" exitCode=137 Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.838488 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da6d72d1-2b6f-4771-a6b8-fb12638a6920","Type":"ContainerDied","Data":"cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464"} Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.838515 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.838541 4782 scope.go:117] "RemoveContainer" containerID="cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.838530 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da6d72d1-2b6f-4771-a6b8-fb12638a6920","Type":"ContainerDied","Data":"62e5b3bffe7bfcba73f9a446d8c33a9e270715425ebee9f02f11218ef6bb9622"} Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.839999 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.840023 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6d72d1-2b6f-4771-a6b8-fb12638a6920-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.840035 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp5tb\" (UniqueName: \"kubernetes.io/projected/da6d72d1-2b6f-4771-a6b8-fb12638a6920-kube-api-access-sp5tb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.871534 4782 scope.go:117] "RemoveContainer" containerID="cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464" Nov 24 12:17:15 crc kubenswrapper[4782]: E1124 12:17:15.872085 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464\": container with ID starting with cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464 not found: ID does not exist" containerID="cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.872124 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464"} err="failed to get container status \"cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464\": rpc error: code = NotFound desc = could not find container \"cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464\": container with ID starting with cefd498c1c474a39c049adc7c56e66234e850dc6e4c0d15925ff424a01bb6464 not found: ID does not exist" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.882891 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.887879 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.902151 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.912256 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.915596 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.937613 4782 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:17:15 crc kubenswrapper[4782]: E1124 12:17:15.938071 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6d72d1-2b6f-4771-a6b8-fb12638a6920" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.938088 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6d72d1-2b6f-4771-a6b8-fb12638a6920" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.938460 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6d72d1-2b6f-4771-a6b8-fb12638a6920" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.939033 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.945337 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.945415 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.945441 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqzh\" (UniqueName: \"kubernetes.io/projected/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-kube-api-access-qdqzh\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.945495 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.945518 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.946563 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.946771 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 12:17:15 crc kubenswrapper[4782]: I1124 12:17:15.946947 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.017839 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.046201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.046256 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.046277 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqzh\" (UniqueName: \"kubernetes.io/projected/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-kube-api-access-qdqzh\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.046316 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.046340 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.050538 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.050864 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.050918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.052635 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.064694 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqzh\" (UniqueName: \"kubernetes.io/projected/4154f325-2ba9-4e67-a59e-d5e71d9f8cd8-kube-api-access-qdqzh\") pod \"nova-cell1-novncproxy-0\" (UID: \"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.297443 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.828155 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:17:16 crc kubenswrapper[4782]: W1124 12:17:16.829479 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4154f325_2ba9_4e67_a59e_d5e71d9f8cd8.slice/crio-0485f075fabeed94757b2b1e964f3d484ae37fb0bc223fea82794d4b985dc7be WatchSource:0}: Error finding container 0485f075fabeed94757b2b1e964f3d484ae37fb0bc223fea82794d4b985dc7be: Status 404 returned error can't find the container with id 0485f075fabeed94757b2b1e964f3d484ae37fb0bc223fea82794d4b985dc7be Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.849220 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8","Type":"ContainerStarted","Data":"0485f075fabeed94757b2b1e964f3d484ae37fb0bc223fea82794d4b985dc7be"} Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.852572 4782 generic.go:334] "Generic (PLEG): container finished" podID="b6cd757b-7259-4caf-b928-2dc936c99028" containerID="64cbbdb567acf5e868c8f354beb70b099e03307cd06d11f8948a9b2d2ca6c089" exitCode=137 Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.853758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerDied","Data":"64cbbdb567acf5e868c8f354beb70b099e03307cd06d11f8948a9b2d2ca6c089"} Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.853790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8684f6cd6d-mwlp6" event={"ID":"b6cd757b-7259-4caf-b928-2dc936c99028","Type":"ContainerDied","Data":"53a36cf5d726392feba1958cf8a4324339bc3dc7f18b4396aad3c4c8a393f9fe"} Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.853803 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53a36cf5d726392feba1958cf8a4324339bc3dc7f18b4396aad3c4c8a393f9fe" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.860479 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.932219 4782 util.go:48] "No ready sandbox for pod can be found. 
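Annotation. The stretch above shows the delete/re-create hand-off that each of these pods goes through: "SyncLoop DELETE" -> "SyncLoop REMOVE" -> "SyncLoop ADD" with a fresh pod UID (ceilometer-0 moved from c0de4cc4-... to 4913fed0-..., nova-cell1-novncproxy-0 from da6d72d1-... to 4154f325-...), while the cAdvisor "Failed to process watch event ... Status 404" warning is the usual race between the cgroup watch firing and the new cri-o container becoming inspectable. A sketch that makes the UID hand-off visible by listing, per pod, every UID seen in "SyncLoop (PLEG)" events, in first-seen order:

import re
from collections import defaultdict

# Matches the pod="..." event={"ID":"..."} shape of the PLEG entries above.
EVT = re.compile(r'pod="(?P<pod>[^"]+)" event={"ID":"(?P<uid>[0-9a-f-]{36})"')

def uids_per_pod(path: str) -> dict:
    seen = defaultdict(list)
    with open(path, errors="replace") as fh:
        for line in fh:
            m = EVT.search(line)
            if m and m["uid"] not in seen[m["pod"]]:
                seen[m["pod"]].append(m["uid"])
    return dict(seen)

# uids_per_pod("kubelet.log")
# -> {"openstack/ceilometer-0": ["c0de4cc4-...", "4913fed0-..."], ...}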
Need to start a new one" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.971988 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-config-data\") pod \"b6cd757b-7259-4caf-b928-2dc936c99028\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.972054 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-tls-certs\") pod \"b6cd757b-7259-4caf-b928-2dc936c99028\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.972202 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cd757b-7259-4caf-b928-2dc936c99028-logs\") pod \"b6cd757b-7259-4caf-b928-2dc936c99028\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.972265 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-combined-ca-bundle\") pod \"b6cd757b-7259-4caf-b928-2dc936c99028\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.972300 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-scripts\") pod \"b6cd757b-7259-4caf-b928-2dc936c99028\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.972364 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn4sp\" (UniqueName: \"kubernetes.io/projected/b6cd757b-7259-4caf-b928-2dc936c99028-kube-api-access-bn4sp\") pod \"b6cd757b-7259-4caf-b928-2dc936c99028\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.972429 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-secret-key\") pod \"b6cd757b-7259-4caf-b928-2dc936c99028\" (UID: \"b6cd757b-7259-4caf-b928-2dc936c99028\") " Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.973357 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6cd757b-7259-4caf-b928-2dc936c99028-logs" (OuterVolumeSpecName: "logs") pod "b6cd757b-7259-4caf-b928-2dc936c99028" (UID: "b6cd757b-7259-4caf-b928-2dc936c99028"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.976515 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cd757b-7259-4caf-b928-2dc936c99028-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.985290 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd757b-7259-4caf-b928-2dc936c99028-kube-api-access-bn4sp" (OuterVolumeSpecName: "kube-api-access-bn4sp") pod "b6cd757b-7259-4caf-b928-2dc936c99028" (UID: "b6cd757b-7259-4caf-b928-2dc936c99028"). 
InnerVolumeSpecName "kube-api-access-bn4sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:16 crc kubenswrapper[4782]: I1124 12:17:16.998432 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b6cd757b-7259-4caf-b928-2dc936c99028" (UID: "b6cd757b-7259-4caf-b928-2dc936c99028"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.013986 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-scripts" (OuterVolumeSpecName: "scripts") pod "b6cd757b-7259-4caf-b928-2dc936c99028" (UID: "b6cd757b-7259-4caf-b928-2dc936c99028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.026856 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6cd757b-7259-4caf-b928-2dc936c99028" (UID: "b6cd757b-7259-4caf-b928-2dc936c99028"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.035033 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-config-data" (OuterVolumeSpecName: "config-data") pod "b6cd757b-7259-4caf-b928-2dc936c99028" (UID: "b6cd757b-7259-4caf-b928-2dc936c99028"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.072878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b6cd757b-7259-4caf-b928-2dc936c99028" (UID: "b6cd757b-7259-4caf-b928-2dc936c99028"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.078779 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.078803 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.078813 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn4sp\" (UniqueName: \"kubernetes.io/projected/b6cd757b-7259-4caf-b928-2dc936c99028-kube-api-access-bn4sp\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.078822 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.078829 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6cd757b-7259-4caf-b928-2dc936c99028-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.078837 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cd757b-7259-4caf-b928-2dc936c99028-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.501775 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6d72d1-2b6f-4771-a6b8-fb12638a6920" path="/var/lib/kubelet/pods/da6d72d1-2b6f-4771-a6b8-fb12638a6920/volumes" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.862184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4154f325-2ba9-4e67-a59e-d5e71d9f8cd8","Type":"ContainerStarted","Data":"492185efcfb4d1fe987dbd5b9534f724db1f1a942b602c3648cdf64e9751614a"} Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.862246 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8684f6cd6d-mwlp6" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.880471 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.880447917 podStartE2EDuration="2.880447917s" podCreationTimestamp="2025-11-24 12:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:17.876674815 +0000 UTC m=+1287.120508584" watchObservedRunningTime="2025-11-24 12:17:17.880447917 +0000 UTC m=+1287.124281686" Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.901129 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8684f6cd6d-mwlp6"] Nov 24 12:17:17 crc kubenswrapper[4782]: I1124 12:17:17.910535 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8684f6cd6d-mwlp6"] Nov 24 12:17:18 crc kubenswrapper[4782]: I1124 12:17:18.203572 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:17:18 crc kubenswrapper[4782]: I1124 12:17:18.204393 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:17:18 crc kubenswrapper[4782]: I1124 12:17:18.210518 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:17:18 crc kubenswrapper[4782]: I1124 12:17:18.211837 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:17:18 crc kubenswrapper[4782]: I1124 12:17:18.869835 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:17:18 crc kubenswrapper[4782]: I1124 12:17:18.873051 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.130413 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-k5q8p"] Nov 24 12:17:19 crc kubenswrapper[4782]: E1124 12:17:19.130854 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.130877 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: E1124 12:17:19.130903 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.130911 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: E1124 12:17:19.130932 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon-log" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.130941 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon-log" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.131121 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.131138 4782 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon-log" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.131150 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: E1124 12:17:19.131337 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.131351 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.131597 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" containerName="horizon" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.137906 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.182489 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-k5q8p"] Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.238052 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.238138 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2djh\" (UniqueName: \"kubernetes.io/projected/1eb193b3-10e9-491a-b0cf-4e7c00375198-kube-api-access-n2djh\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.238241 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.238307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.238328 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-config\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.238505 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: 
\"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.340394 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.340532 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.340552 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2djh\" (UniqueName: \"kubernetes.io/projected/1eb193b3-10e9-491a-b0cf-4e7c00375198-kube-api-access-n2djh\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.340587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.340627 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-config\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.340641 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.341455 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.341943 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.342922 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " 
pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.343733 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-config\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.347569 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.377287 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2djh\" (UniqueName: \"kubernetes.io/projected/1eb193b3-10e9-491a-b0cf-4e7c00375198-kube-api-access-n2djh\") pod \"dnsmasq-dns-5c7b6c5df9-k5q8p\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.457055 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:19 crc kubenswrapper[4782]: I1124 12:17:19.524324 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd757b-7259-4caf-b928-2dc936c99028" path="/var/lib/kubelet/pods/b6cd757b-7259-4caf-b928-2dc936c99028/volumes" Nov 24 12:17:20 crc kubenswrapper[4782]: W1124 12:17:20.053946 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb193b3_10e9_491a_b0cf_4e7c00375198.slice/crio-2cc363102b5df6b89b6816f213660646e16827879922caadee9c088d551bb3ad WatchSource:0}: Error finding container 2cc363102b5df6b89b6816f213660646e16827879922caadee9c088d551bb3ad: Status 404 returned error can't find the container with id 2cc363102b5df6b89b6816f213660646e16827879922caadee9c088d551bb3ad Nov 24 12:17:20 crc kubenswrapper[4782]: I1124 12:17:20.069545 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-k5q8p"] Nov 24 12:17:20 crc kubenswrapper[4782]: I1124 12:17:20.889618 4782 generic.go:334] "Generic (PLEG): container finished" podID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerID="a3b3ecde0310e26419505ebce09b4ae6f94762663149b78913c8a0259c96dadf" exitCode=0 Nov 24 12:17:20 crc kubenswrapper[4782]: I1124 12:17:20.891300 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" event={"ID":"1eb193b3-10e9-491a-b0cf-4e7c00375198","Type":"ContainerDied","Data":"a3b3ecde0310e26419505ebce09b4ae6f94762663149b78913c8a0259c96dadf"} Nov 24 12:17:20 crc kubenswrapper[4782]: I1124 12:17:20.891474 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" event={"ID":"1eb193b3-10e9-491a-b0cf-4e7c00375198","Type":"ContainerStarted","Data":"2cc363102b5df6b89b6816f213660646e16827879922caadee9c088d551bb3ad"} Nov 24 12:17:21 crc kubenswrapper[4782]: I1124 12:17:21.297663 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:21 crc kubenswrapper[4782]: I1124 12:17:21.868646 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:21 crc 
kubenswrapper[4782]: I1124 12:17:21.900954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" event={"ID":"1eb193b3-10e9-491a-b0cf-4e7c00375198","Type":"ContainerStarted","Data":"2cfdeb6bfcea7c353056f9158008d0301187252564f1aa239a37f21e37ca75e7"} Nov 24 12:17:21 crc kubenswrapper[4782]: I1124 12:17:21.901107 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-log" containerID="cri-o://2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6" gracePeriod=30 Nov 24 12:17:21 crc kubenswrapper[4782]: I1124 12:17:21.901335 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-api" containerID="cri-o://a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6" gracePeriod=30 Nov 24 12:17:21 crc kubenswrapper[4782]: I1124 12:17:21.941427 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" podStartSLOduration=2.941403696 podStartE2EDuration="2.941403696s" podCreationTimestamp="2025-11-24 12:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:21.929655607 +0000 UTC m=+1291.173489386" watchObservedRunningTime="2025-11-24 12:17:21.941403696 +0000 UTC m=+1291.185237475" Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.055125 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.055709 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="proxy-httpd" containerID="cri-o://ea5a00eb9ed8e932299140d28f0dc9200abbc0b739e16d7649581a6a6559665d" gracePeriod=30 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.055998 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="sg-core" containerID="cri-o://83d90465fa7997488aa04583a9e5579eb638b8083c921820fd982adbea32f53a" gracePeriod=30 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.056101 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-notification-agent" containerID="cri-o://157f1d8891dabc18c763823ba464332795228ebdcf06fa73a32ff9185ca4f3fe" gracePeriod=30 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.055664 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-central-agent" containerID="cri-o://06943e7445c8fc9d33afec240b6d57010302ad9ed8a821cd61294413344369be" gracePeriod=30 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.074724 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": EOF" Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.921225 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerID="ea5a00eb9ed8e932299140d28f0dc9200abbc0b739e16d7649581a6a6559665d" exitCode=0 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.921571 4782 generic.go:334] "Generic (PLEG): container finished" podID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerID="83d90465fa7997488aa04583a9e5579eb638b8083c921820fd982adbea32f53a" exitCode=2 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.921580 4782 generic.go:334] "Generic (PLEG): container finished" podID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerID="06943e7445c8fc9d33afec240b6d57010302ad9ed8a821cd61294413344369be" exitCode=0 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.921352 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerDied","Data":"ea5a00eb9ed8e932299140d28f0dc9200abbc0b739e16d7649581a6a6559665d"} Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.921672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerDied","Data":"83d90465fa7997488aa04583a9e5579eb638b8083c921820fd982adbea32f53a"} Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.921688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerDied","Data":"06943e7445c8fc9d33afec240b6d57010302ad9ed8a821cd61294413344369be"} Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.925014 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerID="2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6" exitCode=143 Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.925111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5","Type":"ContainerDied","Data":"2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6"} Nov 24 12:17:22 crc kubenswrapper[4782]: I1124 12:17:22.925345 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:24 crc kubenswrapper[4782]: I1124 12:17:24.965085 4782 generic.go:334] "Generic (PLEG): container finished" podID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerID="157f1d8891dabc18c763823ba464332795228ebdcf06fa73a32ff9185ca4f3fe" exitCode=0 Nov 24 12:17:24 crc kubenswrapper[4782]: I1124 12:17:24.965411 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerDied","Data":"157f1d8891dabc18c763823ba464332795228ebdcf06fa73a32ff9185ca4f3fe"} Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.229195 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391489 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-log-httpd\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391569 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-combined-ca-bundle\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391610 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-ceilometer-tls-certs\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391645 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c49qp\" (UniqueName: \"kubernetes.io/projected/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-kube-api-access-c49qp\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391676 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-sg-core-conf-yaml\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391780 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-run-httpd\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-config-data\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.391843 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-scripts\") pod \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\" (UID: \"4913fed0-c8fc-478a-8466-e2fdf5caf6ba\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.392084 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.392581 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.393052 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.398262 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-scripts" (OuterVolumeSpecName: "scripts") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.398541 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-kube-api-access-c49qp" (OuterVolumeSpecName: "kube-api-access-c49qp") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "kube-api-access-c49qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.430078 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.445160 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.452262 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.494573 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.494802 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c49qp\" (UniqueName: \"kubernetes.io/projected/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-kube-api-access-c49qp\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.494890 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.494955 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.495019 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.507650 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.511463 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-config-data" (OuterVolumeSpecName: "config-data") pod "4913fed0-c8fc-478a-8466-e2fdf5caf6ba" (UID: "4913fed0-c8fc-478a-8466-e2fdf5caf6ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.596428 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-combined-ca-bundle\") pod \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.596501 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-config-data\") pod \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.596600 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9bmg\" (UniqueName: \"kubernetes.io/projected/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-kube-api-access-w9bmg\") pod \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.596718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-logs\") pod \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\" (UID: \"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5\") " Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.597270 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.597290 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4913fed0-c8fc-478a-8466-e2fdf5caf6ba-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.597360 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-logs" (OuterVolumeSpecName: "logs") pod "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" (UID: "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.619217 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-kube-api-access-w9bmg" (OuterVolumeSpecName: "kube-api-access-w9bmg") pod "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" (UID: "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5"). InnerVolumeSpecName "kube-api-access-w9bmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.638508 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-config-data" (OuterVolumeSpecName: "config-data") pod "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" (UID: "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.647529 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" (UID: "8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.698461 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.698507 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9bmg\" (UniqueName: \"kubernetes.io/projected/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-kube-api-access-w9bmg\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.698521 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.698532 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.976044 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4913fed0-c8fc-478a-8466-e2fdf5caf6ba","Type":"ContainerDied","Data":"a38f3108c26c471aa9025577e967da4998da75263ba501a4f018ed59868f84a7"} Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.976108 4782 scope.go:117] "RemoveContainer" containerID="ea5a00eb9ed8e932299140d28f0dc9200abbc0b739e16d7649581a6a6559665d" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.976254 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.980231 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerID="a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6" exitCode=0 Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.980273 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5","Type":"ContainerDied","Data":"a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6"} Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.980298 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5","Type":"ContainerDied","Data":"5a008c073eaf2c20a1d0c062b5a10c23049a20f02ac35786cbfdfb7b48f87de8"} Nov 24 12:17:25 crc kubenswrapper[4782]: I1124 12:17:25.980351 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.021919 4782 scope.go:117] "RemoveContainer" containerID="83d90465fa7997488aa04583a9e5579eb638b8083c921820fd982adbea32f53a" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.054420 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.062077 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.063215 4782 scope.go:117] "RemoveContainer" containerID="157f1d8891dabc18c763823ba464332795228ebdcf06fa73a32ff9185ca4f3fe" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.079429 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.094468 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.106832 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.118296 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-notification-agent" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.118330 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-notification-agent" Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.118347 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="sg-core" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.118353 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="sg-core" Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.118408 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-central-agent" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.118414 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-central-agent" Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.118437 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-api" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.118444 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-api" Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.118462 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-log" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.118468 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-log" Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.118491 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="proxy-httpd" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.118496 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="proxy-httpd" Nov 24 12:17:26 crc 
kubenswrapper[4782]: I1124 12:17:26.119269 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-log" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.119380 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" containerName="nova-api-api" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.119398 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-central-agent" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.119408 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="ceilometer-notification-agent" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.119424 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="proxy-httpd" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.119444 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" containerName="sg-core" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.133610 4782 scope.go:117] "RemoveContainer" containerID="06943e7445c8fc9d33afec240b6d57010302ad9ed8a821cd61294413344369be" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.175136 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.175277 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.180133 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.180198 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.180517 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.202887 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.224770 4782 util.go:30] "No sandbox for pod can be found. 
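
Annotation: the paired cpu_manager/state_mem and memory_manager "RemoveStaleState" entries fire while the replacement nova-api-0 and ceilometer-0 pods are being admitted: resource assignments are keyed by (podUID, containerName), so entries left over from the deleted UIDs are swept before new state is written. A hypothetical sketch of that sweep (illustrative Go, invented names):

// Hypothetical sketch of the stale-state sweep logged by cpu_manager and
// memory_manager above: assignments keyed by (podUID, container) are
// dropped once the pod no longer exists.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n",
				k.container, k.podUID)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	state := map[key]string{
		{"4913fed0-c8fc-478a-8466-e2fdf5caf6ba", "sg-core"}: "cpuset 0-3",
	}
	removeStaleState(state, map[string]bool{}) // old ceilometer-0 is gone
	fmt.Println("remaining assignments:", len(state))
}
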
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.234700 4782 scope.go:117] "RemoveContainer" containerID="a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.246155 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.246418 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.250350 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.263216 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.301931 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.318884 4782 scope.go:117] "RemoveContainer" containerID="2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.354044 4782 scope.go:117] "RemoveContainer" containerID="a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6" Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.354544 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6\": container with ID starting with a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6 not found: ID does not exist" containerID="a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.354592 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6"} err="failed to get container status \"a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6\": rpc error: code = NotFound desc = could not find container \"a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6\": container with ID starting with a7d791d7208c54157ed3f434735ce495ccd5b4d9f405c2368eb8e8b9c3c072c6 not found: ID does not exist" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.354612 4782 scope.go:117] "RemoveContainer" containerID="2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6" Nov 24 12:17:26 crc kubenswrapper[4782]: E1124 12:17:26.355347 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6\": container with ID starting with 2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6 not found: ID does not exist" containerID="2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.355402 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6"} err="failed to get container status \"2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6\": rpc error: code = NotFound desc = could not find container 
\"2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6\": container with ID starting with 2c0ea800d03676b92229665fcd41c026b4fc34899aff135ab028cb46aff63ad6 not found: ID does not exist" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.355891 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-public-tls-certs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.355959 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-config-data\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.355995 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-run-httpd\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-log-httpd\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356088 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-scripts\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356111 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-config-data\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356148 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356180 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmpqf\" (UniqueName: \"kubernetes.io/projected/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-kube-api-access-xmpqf\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5629c15e-a039-4589-b825-9ea034388d42-logs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: 
I1124 12:17:26.356248 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356294 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356330 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr65t\" (UniqueName: \"kubernetes.io/projected/5629c15e-a039-4589-b825-9ea034388d42-kube-api-access-nr65t\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356351 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.356385 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.357981 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.457966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-log-httpd\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-scripts\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-config-data\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458341 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458383 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xmpqf\" (UniqueName: \"kubernetes.io/projected/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-kube-api-access-xmpqf\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5629c15e-a039-4589-b825-9ea034388d42-logs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458447 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458507 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458531 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr65t\" (UniqueName: \"kubernetes.io/projected/5629c15e-a039-4589-b825-9ea034388d42-kube-api-access-nr65t\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458550 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458569 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-public-tls-certs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458637 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-config-data\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.458658 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-run-httpd\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.459121 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-run-httpd\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.459550 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-log-httpd\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.465078 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.466100 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-scripts\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.466397 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5629c15e-a039-4589-b825-9ea034388d42-logs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.466956 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.467858 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-public-tls-certs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.467927 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.468335 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-config-data\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.470657 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.470814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.479191 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-config-data\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.484025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr65t\" (UniqueName: \"kubernetes.io/projected/5629c15e-a039-4589-b825-9ea034388d42-kube-api-access-nr65t\") pod \"nova-api-0\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.484850 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmpqf\" (UniqueName: \"kubernetes.io/projected/0a6e941b-a7bd-4365-88eb-5daaa2b590ab-kube-api-access-xmpqf\") pod \"ceilometer-0\" (UID: \"0a6e941b-a7bd-4365-88eb-5daaa2b590ab\") " pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.564093 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:26 crc kubenswrapper[4782]: I1124 12:17:26.621129 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.014020 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.099022 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.235841 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9x4kt"] Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.237158 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.242597 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.243906 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.265741 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9x4kt"] Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.313198 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.376181 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-scripts\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.376980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtlt\" (UniqueName: \"kubernetes.io/projected/803de2c6-2262-4959-8c9f-afc4a2da0196-kube-api-access-4xtlt\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.377119 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-config-data\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.377258 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.478990 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-scripts\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.479036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtlt\" (UniqueName: \"kubernetes.io/projected/803de2c6-2262-4959-8c9f-afc4a2da0196-kube-api-access-4xtlt\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.479075 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-config-data\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc 
kubenswrapper[4782]: I1124 12:17:27.479098 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.482859 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-scripts\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.483871 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.487878 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-config-data\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.498269 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtlt\" (UniqueName: \"kubernetes.io/projected/803de2c6-2262-4959-8c9f-afc4a2da0196-kube-api-access-4xtlt\") pod \"nova-cell1-cell-mapping-9x4kt\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.503733 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4913fed0-c8fc-478a-8466-e2fdf5caf6ba" path="/var/lib/kubelet/pods/4913fed0-c8fc-478a-8466-e2fdf5caf6ba/volumes" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.504969 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5" path="/var/lib/kubelet/pods/8fa76cd5-9bfb-46c9-b5c2-6a59a81487d5/volumes" Nov 24 12:17:27 crc kubenswrapper[4782]: I1124 12:17:27.560261 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:28 crc kubenswrapper[4782]: I1124 12:17:28.022648 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a6e941b-a7bd-4365-88eb-5daaa2b590ab","Type":"ContainerStarted","Data":"01dff4e9211b11986efd5db6c10025f7b8f10aee6ba1f03c7954fb740cc09a62"} Nov 24 12:17:28 crc kubenswrapper[4782]: I1124 12:17:28.027154 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5629c15e-a039-4589-b825-9ea034388d42","Type":"ContainerStarted","Data":"af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03"} Nov 24 12:17:28 crc kubenswrapper[4782]: I1124 12:17:28.027227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5629c15e-a039-4589-b825-9ea034388d42","Type":"ContainerStarted","Data":"7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9"} Nov 24 12:17:28 crc kubenswrapper[4782]: I1124 12:17:28.027244 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5629c15e-a039-4589-b825-9ea034388d42","Type":"ContainerStarted","Data":"0ef134b0bf14f684248ebacceec759cdc2171acde482d672d6cabec877aed5c7"} Nov 24 12:17:28 crc kubenswrapper[4782]: I1124 12:17:28.064980 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9x4kt"] Nov 24 12:17:28 crc kubenswrapper[4782]: I1124 12:17:28.067696 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.067675841 podStartE2EDuration="2.067675841s" podCreationTimestamp="2025-11-24 12:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:28.050501429 +0000 UTC m=+1297.294335208" watchObservedRunningTime="2025-11-24 12:17:28.067675841 +0000 UTC m=+1297.311509620" Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.042740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9x4kt" event={"ID":"803de2c6-2262-4959-8c9f-afc4a2da0196","Type":"ContainerStarted","Data":"35c5577736e0ff83143a7d724fb03fde38db5cd2427e79526badfd1d61442aaf"} Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.043335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9x4kt" event={"ID":"803de2c6-2262-4959-8c9f-afc4a2da0196","Type":"ContainerStarted","Data":"4e319d746435978c21263993402ff6d0ea938dd4452e19822b65677bef25869f"} Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.055737 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a6e941b-a7bd-4365-88eb-5daaa2b590ab","Type":"ContainerStarted","Data":"bc72097f50abc42a4550bf7363fff05907aa45a8dbbcda6aba145233f3cfbb3a"} Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.056663 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a6e941b-a7bd-4365-88eb-5daaa2b590ab","Type":"ContainerStarted","Data":"c81da023ad039b4bd115dca9f23b1baf1a36104e0b204bfe48ce10aca62ee9de"} Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.078955 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-9x4kt" podStartSLOduration=2.078935408 podStartE2EDuration="2.078935408s" podCreationTimestamp="2025-11-24 12:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:29.074606192 +0000 UTC m=+1298.318439961" watchObservedRunningTime="2025-11-24 12:17:29.078935408 +0000 UTC m=+1298.322769177" Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.459570 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.570690 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-fd9hw"] Nov 24 12:17:29 crc kubenswrapper[4782]: I1124 12:17:29.570921 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" podUID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerName="dnsmasq-dns" containerID="cri-o://8a51cd1cfc8f64974b144de3611fdd6f8f2e54414fb6ae9ee29a44992a51af99" gracePeriod=10 Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.071824 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerID="8a51cd1cfc8f64974b144de3611fdd6f8f2e54414fb6ae9ee29a44992a51af99" exitCode=0 Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.072045 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" event={"ID":"7f861c7e-382d-4918-a489-9f98dac4a11e","Type":"ContainerDied","Data":"8a51cd1cfc8f64974b144de3611fdd6f8f2e54414fb6ae9ee29a44992a51af99"} Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.079731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a6e941b-a7bd-4365-88eb-5daaa2b590ab","Type":"ContainerStarted","Data":"f74b646928306fbcdaa547108a5e48887424b81fe267edd6f04557b2e31b12a0"} Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.326767 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.410865 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.410930 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.434600 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m8k9\" (UniqueName: \"kubernetes.io/projected/7f861c7e-382d-4918-a489-9f98dac4a11e-kube-api-access-6m8k9\") pod \"7f861c7e-382d-4918-a489-9f98dac4a11e\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.434740 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-sb\") pod \"7f861c7e-382d-4918-a489-9f98dac4a11e\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.434825 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-svc\") pod \"7f861c7e-382d-4918-a489-9f98dac4a11e\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.434889 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-swift-storage-0\") pod \"7f861c7e-382d-4918-a489-9f98dac4a11e\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.434932 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-nb\") pod \"7f861c7e-382d-4918-a489-9f98dac4a11e\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.434992 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-config\") pod \"7f861c7e-382d-4918-a489-9f98dac4a11e\" (UID: \"7f861c7e-382d-4918-a489-9f98dac4a11e\") " Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.457941 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f861c7e-382d-4918-a489-9f98dac4a11e-kube-api-access-6m8k9" (OuterVolumeSpecName: "kube-api-access-6m8k9") pod "7f861c7e-382d-4918-a489-9f98dac4a11e" (UID: "7f861c7e-382d-4918-a489-9f98dac4a11e"). InnerVolumeSpecName "kube-api-access-6m8k9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.505180 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f861c7e-382d-4918-a489-9f98dac4a11e" (UID: "7f861c7e-382d-4918-a489-9f98dac4a11e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.512939 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f861c7e-382d-4918-a489-9f98dac4a11e" (UID: "7f861c7e-382d-4918-a489-9f98dac4a11e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.527878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f861c7e-382d-4918-a489-9f98dac4a11e" (UID: "7f861c7e-382d-4918-a489-9f98dac4a11e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.536995 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.537032 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.537044 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.537054 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m8k9\" (UniqueName: \"kubernetes.io/projected/7f861c7e-382d-4918-a489-9f98dac4a11e-kube-api-access-6m8k9\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.539470 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f861c7e-382d-4918-a489-9f98dac4a11e" (UID: "7f861c7e-382d-4918-a489-9f98dac4a11e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.541940 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-config" (OuterVolumeSpecName: "config") pod "7f861c7e-382d-4918-a489-9f98dac4a11e" (UID: "7f861c7e-382d-4918-a489-9f98dac4a11e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.638589 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:30 crc kubenswrapper[4782]: I1124 12:17:30.638630 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f861c7e-382d-4918-a489-9f98dac4a11e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.088708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" event={"ID":"7f861c7e-382d-4918-a489-9f98dac4a11e","Type":"ContainerDied","Data":"79a7b8599dce9277e30e458c854bc2cd59975fb713e5d81078e688063299f928"} Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.089101 4782 scope.go:117] "RemoveContainer" containerID="8a51cd1cfc8f64974b144de3611fdd6f8f2e54414fb6ae9ee29a44992a51af99" Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.089056 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-fd9hw" Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.092617 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a6e941b-a7bd-4365-88eb-5daaa2b590ab","Type":"ContainerStarted","Data":"903dd14d6823f5f4ef43c0cd7e96f99f0ce3a5c0d709608e31cc584b53c3c570"} Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.092743 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.111908 4782 scope.go:117] "RemoveContainer" containerID="c7411f041626c76f4b03617b330f4c5764451438a6a523e1864559e99722bca4" Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.124224 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8279237799999999 podStartE2EDuration="5.124204543s" podCreationTimestamp="2025-11-24 12:17:26 +0000 UTC" firstStartedPulling="2025-11-24 12:17:27.324574119 +0000 UTC m=+1296.568407888" lastFinishedPulling="2025-11-24 12:17:30.620854882 +0000 UTC m=+1299.864688651" observedRunningTime="2025-11-24 12:17:31.114084041 +0000 UTC m=+1300.357917820" watchObservedRunningTime="2025-11-24 12:17:31.124204543 +0000 UTC m=+1300.368038312" Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.159726 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-fd9hw"] Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.169700 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-fd9hw"] Nov 24 12:17:31 crc kubenswrapper[4782]: I1124 12:17:31.502064 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f861c7e-382d-4918-a489-9f98dac4a11e" path="/var/lib/kubelet/pods/7f861c7e-382d-4918-a489-9f98dac4a11e/volumes" Nov 24 12:17:35 crc kubenswrapper[4782]: I1124 12:17:35.129983 4782 generic.go:334] "Generic (PLEG): container finished" podID="803de2c6-2262-4959-8c9f-afc4a2da0196" containerID="35c5577736e0ff83143a7d724fb03fde38db5cd2427e79526badfd1d61442aaf" exitCode=0 Nov 24 12:17:35 crc kubenswrapper[4782]: I1124 12:17:35.130079 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9x4kt" 
event={"ID":"803de2c6-2262-4959-8c9f-afc4a2da0196","Type":"ContainerDied","Data":"35c5577736e0ff83143a7d724fb03fde38db5cd2427e79526badfd1d61442aaf"} Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.479142 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.564006 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-combined-ca-bundle\") pod \"803de2c6-2262-4959-8c9f-afc4a2da0196\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.564133 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xtlt\" (UniqueName: \"kubernetes.io/projected/803de2c6-2262-4959-8c9f-afc4a2da0196-kube-api-access-4xtlt\") pod \"803de2c6-2262-4959-8c9f-afc4a2da0196\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.564231 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-config-data\") pod \"803de2c6-2262-4959-8c9f-afc4a2da0196\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.564256 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-scripts\") pod \"803de2c6-2262-4959-8c9f-afc4a2da0196\" (UID: \"803de2c6-2262-4959-8c9f-afc4a2da0196\") " Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.565929 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.566425 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.572080 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-scripts" (OuterVolumeSpecName: "scripts") pod "803de2c6-2262-4959-8c9f-afc4a2da0196" (UID: "803de2c6-2262-4959-8c9f-afc4a2da0196"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.577097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803de2c6-2262-4959-8c9f-afc4a2da0196-kube-api-access-4xtlt" (OuterVolumeSpecName: "kube-api-access-4xtlt") pod "803de2c6-2262-4959-8c9f-afc4a2da0196" (UID: "803de2c6-2262-4959-8c9f-afc4a2da0196"). InnerVolumeSpecName "kube-api-access-4xtlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.606899 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "803de2c6-2262-4959-8c9f-afc4a2da0196" (UID: "803de2c6-2262-4959-8c9f-afc4a2da0196"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.608995 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-config-data" (OuterVolumeSpecName: "config-data") pod "803de2c6-2262-4959-8c9f-afc4a2da0196" (UID: "803de2c6-2262-4959-8c9f-afc4a2da0196"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.667229 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.667261 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xtlt\" (UniqueName: \"kubernetes.io/projected/803de2c6-2262-4959-8c9f-afc4a2da0196-kube-api-access-4xtlt\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.667271 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:36 crc kubenswrapper[4782]: I1124 12:17:36.667280 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803de2c6-2262-4959-8c9f-afc4a2da0196-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.149829 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9x4kt" Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.150507 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9x4kt" event={"ID":"803de2c6-2262-4959-8c9f-afc4a2da0196","Type":"ContainerDied","Data":"4e319d746435978c21263993402ff6d0ea938dd4452e19822b65677bef25869f"} Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.150547 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e319d746435978c21263993402ff6d0ea938dd4452e19822b65677bef25869f" Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.349867 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.350104 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8769e2de-b84d-47e9-9917-4dddfc663732" containerName="nova-scheduler-scheduler" containerID="cri-o://5a9615f9b31ca6cf95f7360da35f57ee061a7b011c03ca3c2696f9449e26f13a" gracePeriod=30 Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.359650 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.425920 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.426380 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-log" containerID="cri-o://821519c3aaa22322fd1c6d244ce2270836c6fa8a17b130e5fefc22b8fd145fd4" gracePeriod=30 Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.426508 4782 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/nova-metadata-0" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-metadata" containerID="cri-o://28573096fb3dbc569157bd2b3ffa7f7645e8e6ee6f6f5fd30a5f825c108145b0" gracePeriod=30 Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.595596 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:37 crc kubenswrapper[4782]: I1124 12:17:37.595617 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:38 crc kubenswrapper[4782]: I1124 12:17:38.160310 4782 generic.go:334] "Generic (PLEG): container finished" podID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerID="821519c3aaa22322fd1c6d244ce2270836c6fa8a17b130e5fefc22b8fd145fd4" exitCode=143 Nov 24 12:17:38 crc kubenswrapper[4782]: I1124 12:17:38.160429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2","Type":"ContainerDied","Data":"821519c3aaa22322fd1c6d244ce2270836c6fa8a17b130e5fefc22b8fd145fd4"} Nov 24 12:17:38 crc kubenswrapper[4782]: I1124 12:17:38.160792 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-log" containerID="cri-o://7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9" gracePeriod=30 Nov 24 12:17:38 crc kubenswrapper[4782]: I1124 12:17:38.160839 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-api" containerID="cri-o://af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03" gracePeriod=30 Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.171666 4782 generic.go:334] "Generic (PLEG): container finished" podID="8769e2de-b84d-47e9-9917-4dddfc663732" containerID="5a9615f9b31ca6cf95f7360da35f57ee061a7b011c03ca3c2696f9449e26f13a" exitCode=0 Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.171747 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8769e2de-b84d-47e9-9917-4dddfc663732","Type":"ContainerDied","Data":"5a9615f9b31ca6cf95f7360da35f57ee061a7b011c03ca3c2696f9449e26f13a"} Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.171792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8769e2de-b84d-47e9-9917-4dddfc663732","Type":"ContainerDied","Data":"d361d979dd88c995250c3c9942d46886eaac1b31c7b4061a809403acd22460dc"} Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.171808 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d361d979dd88c995250c3c9942d46886eaac1b31c7b4061a809403acd22460dc" Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.174393 4782 generic.go:334] "Generic (PLEG): container finished" podID="5629c15e-a039-4589-b825-9ea034388d42" containerID="7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9" exitCode=143 Nov 24 12:17:39 crc 
kubenswrapper[4782]: I1124 12:17:39.174429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5629c15e-a039-4589-b825-9ea034388d42","Type":"ContainerDied","Data":"7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9"} Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.206325 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.325511 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-combined-ca-bundle\") pod \"8769e2de-b84d-47e9-9917-4dddfc663732\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.325843 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9wnc\" (UniqueName: \"kubernetes.io/projected/8769e2de-b84d-47e9-9917-4dddfc663732-kube-api-access-t9wnc\") pod \"8769e2de-b84d-47e9-9917-4dddfc663732\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.325870 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-config-data\") pod \"8769e2de-b84d-47e9-9917-4dddfc663732\" (UID: \"8769e2de-b84d-47e9-9917-4dddfc663732\") " Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.332564 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8769e2de-b84d-47e9-9917-4dddfc663732-kube-api-access-t9wnc" (OuterVolumeSpecName: "kube-api-access-t9wnc") pod "8769e2de-b84d-47e9-9917-4dddfc663732" (UID: "8769e2de-b84d-47e9-9917-4dddfc663732"). InnerVolumeSpecName "kube-api-access-t9wnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.355616 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8769e2de-b84d-47e9-9917-4dddfc663732" (UID: "8769e2de-b84d-47e9-9917-4dddfc663732"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.361398 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-config-data" (OuterVolumeSpecName: "config-data") pod "8769e2de-b84d-47e9-9917-4dddfc663732" (UID: "8769e2de-b84d-47e9-9917-4dddfc663732"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.427790 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.427824 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9wnc\" (UniqueName: \"kubernetes.io/projected/8769e2de-b84d-47e9-9917-4dddfc663732-kube-api-access-t9wnc\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:39 crc kubenswrapper[4782]: I1124 12:17:39.427837 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8769e2de-b84d-47e9-9917-4dddfc663732-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.182951 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.206506 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.217475 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.226495 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:17:40 crc kubenswrapper[4782]: E1124 12:17:40.226892 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803de2c6-2262-4959-8c9f-afc4a2da0196" containerName="nova-manage" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.226909 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="803de2c6-2262-4959-8c9f-afc4a2da0196" containerName="nova-manage" Nov 24 12:17:40 crc kubenswrapper[4782]: E1124 12:17:40.226929 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerName="dnsmasq-dns" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.226935 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerName="dnsmasq-dns" Nov 24 12:17:40 crc kubenswrapper[4782]: E1124 12:17:40.226943 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8769e2de-b84d-47e9-9917-4dddfc663732" containerName="nova-scheduler-scheduler" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.226949 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8769e2de-b84d-47e9-9917-4dddfc663732" containerName="nova-scheduler-scheduler" Nov 24 12:17:40 crc kubenswrapper[4782]: E1124 12:17:40.226960 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerName="init" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.226966 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerName="init" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.227142 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8769e2de-b84d-47e9-9917-4dddfc663732" containerName="nova-scheduler-scheduler" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.227165 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f861c7e-382d-4918-a489-9f98dac4a11e" containerName="dnsmasq-dns" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.227183 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="803de2c6-2262-4959-8c9f-afc4a2da0196" containerName="nova-manage" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.227767 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.229982 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.249127 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.348580 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c234628b-dc63-4176-b7d6-5506de5cd15b-config-data\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.348640 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c234628b-dc63-4176-b7d6-5506de5cd15b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.348721 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pktd\" (UniqueName: \"kubernetes.io/projected/c234628b-dc63-4176-b7d6-5506de5cd15b-kube-api-access-9pktd\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.450203 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c234628b-dc63-4176-b7d6-5506de5cd15b-config-data\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.450286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c234628b-dc63-4176-b7d6-5506de5cd15b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.450332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pktd\" (UniqueName: \"kubernetes.io/projected/c234628b-dc63-4176-b7d6-5506de5cd15b-kube-api-access-9pktd\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.456661 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c234628b-dc63-4176-b7d6-5506de5cd15b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.464086 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c234628b-dc63-4176-b7d6-5506de5cd15b-config-data\") pod \"nova-scheduler-0\" (UID: 
\"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.467894 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pktd\" (UniqueName: \"kubernetes.io/projected/c234628b-dc63-4176-b7d6-5506de5cd15b-kube-api-access-9pktd\") pod \"nova-scheduler-0\" (UID: \"c234628b-dc63-4176-b7d6-5506de5cd15b\") " pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.545600 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.873257 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": dial tcp 10.217.0.193:8775: connect: connection refused" Nov 24 12:17:40 crc kubenswrapper[4782]: I1124 12:17:40.873344 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": dial tcp 10.217.0.193:8775: connect: connection refused" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.067105 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.191794 4782 generic.go:334] "Generic (PLEG): container finished" podID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerID="28573096fb3dbc569157bd2b3ffa7f7645e8e6ee6f6f5fd30a5f825c108145b0" exitCode=0 Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.191887 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2","Type":"ContainerDied","Data":"28573096fb3dbc569157bd2b3ffa7f7645e8e6ee6f6f5fd30a5f825c108145b0"} Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.194922 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c234628b-dc63-4176-b7d6-5506de5cd15b","Type":"ContainerStarted","Data":"62661430d52b1a44357cb490c66847e994be14d1765992b1571b5f6298173531"} Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.558348 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8769e2de-b84d-47e9-9917-4dddfc663732" path="/var/lib/kubelet/pods/8769e2de-b84d-47e9-9917-4dddfc663732/volumes" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.682098 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.792818 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-nova-metadata-tls-certs\") pod \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.792865 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plnrb\" (UniqueName: \"kubernetes.io/projected/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-kube-api-access-plnrb\") pod \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.792993 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-logs\") pod \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.793095 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-combined-ca-bundle\") pod \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.793149 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-config-data\") pod \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\" (UID: \"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2\") " Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.793579 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-logs" (OuterVolumeSpecName: "logs") pod "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" (UID: "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.810564 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-kube-api-access-plnrb" (OuterVolumeSpecName: "kube-api-access-plnrb") pod "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" (UID: "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2"). InnerVolumeSpecName "kube-api-access-plnrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.829113 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-config-data" (OuterVolumeSpecName: "config-data") pod "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" (UID: "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.831510 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" (UID: "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.858821 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" (UID: "363ce1c2-f79b-4591-8a18-97e9a3e1f5c2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.895707 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.895746 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.895759 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.895773 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:41 crc kubenswrapper[4782]: I1124 12:17:41.895786 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plnrb\" (UniqueName: \"kubernetes.io/projected/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2-kube-api-access-plnrb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.211502 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c234628b-dc63-4176-b7d6-5506de5cd15b","Type":"ContainerStarted","Data":"176c669193ffaa8aeb48b1a8e1556a51bae0540f3edc2a5def58b7b8e1be6ddb"} Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.216258 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"363ce1c2-f79b-4591-8a18-97e9a3e1f5c2","Type":"ContainerDied","Data":"8047a6955c3b6baada14d23b03cc955e0b4b6c5b7cf18264eee4ada7246f6d68"} Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.216309 4782 scope.go:117] "RemoveContainer" containerID="28573096fb3dbc569157bd2b3ffa7f7645e8e6ee6f6f5fd30a5f825c108145b0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.216322 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.240312 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.240294427 podStartE2EDuration="2.240294427s" podCreationTimestamp="2025-11-24 12:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:42.229593639 +0000 UTC m=+1311.473427408" watchObservedRunningTime="2025-11-24 12:17:42.240294427 +0000 UTC m=+1311.484128196" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.250688 4782 scope.go:117] "RemoveContainer" containerID="821519c3aaa22322fd1c6d244ce2270836c6fa8a17b130e5fefc22b8fd145fd4" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.282519 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.294826 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.309515 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:17:42 crc kubenswrapper[4782]: E1124 12:17:42.310002 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-log" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.310021 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-log" Nov 24 12:17:42 crc kubenswrapper[4782]: E1124 12:17:42.310031 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-metadata" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.310041 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-metadata" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.310271 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-metadata" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.310302 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" containerName="nova-metadata-log" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.312624 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.322713 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.322912 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.337698 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.404928 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-config-data\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.405032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmxs\" (UniqueName: \"kubernetes.io/projected/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-kube-api-access-zfmxs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.405139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.405167 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.405286 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-logs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.506388 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-logs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.506694 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-config-data\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.506744 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmxs\" (UniqueName: \"kubernetes.io/projected/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-kube-api-access-zfmxs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 
12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.506802 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.506825 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.507955 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-logs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.512058 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-config-data\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.512319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.512425 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.527622 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmxs\" (UniqueName: \"kubernetes.io/projected/ef6b2c28-7003-45fe-922e-40b6f5c2a43a-kube-api-access-zfmxs\") pod \"nova-metadata-0\" (UID: \"ef6b2c28-7003-45fe-922e-40b6f5c2a43a\") " pod="openstack/nova-metadata-0" Nov 24 12:17:42 crc kubenswrapper[4782]: I1124 12:17:42.631274 4782 util.go:30] "No sandbox for pod can be found. 
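The reconciler entries above always come in the same shape per volume: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, so a mount that never completes shows up as a "started" with no matching "succeeded". A minimal sketch for spotting that (Python stdlib; the regex and helper name are illustrative, not kubelet code):

    import re
    import sys

    # Matches the volume names in the kubelet entries above, e.g.
    #   "operationExecutor.MountVolume started for volume \"config-data\" ..."
    #   "MountVolume.SetUp succeeded for volume \"config-data\" ..."
    EVENT = re.compile(r'(MountVolume started|MountVolume\.SetUp succeeded) for volume \\?"([\w.-]+)\\?"')

    def unfinished_mounts(lines):
        """Volumes with a 'started' entry but no 'succeeded' entry."""
        started, succeeded = set(), set()
        for line in lines:
            m = EVENT.search(line)
            if m:
                (started if m.group(1) == "MountVolume started" else succeeded).add(m.group(2))
        return started - succeeded

    if __name__ == "__main__":
        print(sorted(unfinished_mounts(sys.stdin)))  # empty for this log: all five nova-metadata-0 volumes completed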
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:17:43 crc kubenswrapper[4782]: W1124 12:17:43.096318 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6b2c28_7003_45fe_922e_40b6f5c2a43a.slice/crio-085b7752a2f6065e1ceed3736c2a844448efa8215fdd807f002ed73ffd2da14e WatchSource:0}: Error finding container 085b7752a2f6065e1ceed3736c2a844448efa8215fdd807f002ed73ffd2da14e: Status 404 returned error can't find the container with id 085b7752a2f6065e1ceed3736c2a844448efa8215fdd807f002ed73ffd2da14e Nov 24 12:17:43 crc kubenswrapper[4782]: I1124 12:17:43.102964 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:17:43 crc kubenswrapper[4782]: I1124 12:17:43.237222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef6b2c28-7003-45fe-922e-40b6f5c2a43a","Type":"ContainerStarted","Data":"085b7752a2f6065e1ceed3736c2a844448efa8215fdd807f002ed73ffd2da14e"} Nov 24 12:17:43 crc kubenswrapper[4782]: I1124 12:17:43.503132 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363ce1c2-f79b-4591-8a18-97e9a3e1f5c2" path="/var/lib/kubelet/pods/363ce1c2-f79b-4591-8a18-97e9a3e1f5c2/volumes" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.133305 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.243203 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-public-tls-certs\") pod \"5629c15e-a039-4589-b825-9ea034388d42\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.243338 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-combined-ca-bundle\") pod \"5629c15e-a039-4589-b825-9ea034388d42\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.243474 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr65t\" (UniqueName: \"kubernetes.io/projected/5629c15e-a039-4589-b825-9ea034388d42-kube-api-access-nr65t\") pod \"5629c15e-a039-4589-b825-9ea034388d42\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.243536 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5629c15e-a039-4589-b825-9ea034388d42-logs\") pod \"5629c15e-a039-4589-b825-9ea034388d42\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.243575 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-internal-tls-certs\") pod \"5629c15e-a039-4589-b825-9ea034388d42\" (UID: \"5629c15e-a039-4589-b825-9ea034388d42\") " Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.243617 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-config-data\") pod \"5629c15e-a039-4589-b825-9ea034388d42\" (UID: 
\"5629c15e-a039-4589-b825-9ea034388d42\") " Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.244842 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5629c15e-a039-4589-b825-9ea034388d42-logs" (OuterVolumeSpecName: "logs") pod "5629c15e-a039-4589-b825-9ea034388d42" (UID: "5629c15e-a039-4589-b825-9ea034388d42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.248698 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5629c15e-a039-4589-b825-9ea034388d42-kube-api-access-nr65t" (OuterVolumeSpecName: "kube-api-access-nr65t") pod "5629c15e-a039-4589-b825-9ea034388d42" (UID: "5629c15e-a039-4589-b825-9ea034388d42"). InnerVolumeSpecName "kube-api-access-nr65t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.264450 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef6b2c28-7003-45fe-922e-40b6f5c2a43a","Type":"ContainerStarted","Data":"6c7cfa190bf9646143d85c9131f613f6394fca51a61c9a07755f69f1b3836261"} Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.264704 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef6b2c28-7003-45fe-922e-40b6f5c2a43a","Type":"ContainerStarted","Data":"1eec4bcddb93085bc375e314481ea344ef0708ebed8cb5167b0a9ec960fdb2f6"} Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.268210 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.268685 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5629c15e-a039-4589-b825-9ea034388d42","Type":"ContainerDied","Data":"af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03"} Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.268730 4782 scope.go:117] "RemoveContainer" containerID="af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.268100 4782 generic.go:334] "Generic (PLEG): container finished" podID="5629c15e-a039-4589-b825-9ea034388d42" containerID="af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03" exitCode=0 Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.270725 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5629c15e-a039-4589-b825-9ea034388d42","Type":"ContainerDied","Data":"0ef134b0bf14f684248ebacceec759cdc2171acde482d672d6cabec877aed5c7"} Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.295981 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-config-data" (OuterVolumeSpecName: "config-data") pod "5629c15e-a039-4589-b825-9ea034388d42" (UID: "5629c15e-a039-4589-b825-9ea034388d42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.302835 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.302813217 podStartE2EDuration="2.302813217s" podCreationTimestamp="2025-11-24 12:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:44.287990709 +0000 UTC m=+1313.531824468" watchObservedRunningTime="2025-11-24 12:17:44.302813217 +0000 UTC m=+1313.546646986" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.306721 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5629c15e-a039-4589-b825-9ea034388d42" (UID: "5629c15e-a039-4589-b825-9ea034388d42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.315511 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5629c15e-a039-4589-b825-9ea034388d42" (UID: "5629c15e-a039-4589-b825-9ea034388d42"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.318028 4782 scope.go:117] "RemoveContainer" containerID="7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.321632 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5629c15e-a039-4589-b825-9ea034388d42" (UID: "5629c15e-a039-4589-b825-9ea034388d42"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.334456 4782 scope.go:117] "RemoveContainer" containerID="af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03" Nov 24 12:17:44 crc kubenswrapper[4782]: E1124 12:17:44.334985 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03\": container with ID starting with af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03 not found: ID does not exist" containerID="af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.335019 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03"} err="failed to get container status \"af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03\": rpc error: code = NotFound desc = could not find container \"af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03\": container with ID starting with af54c15197edec992bb40b7ee1291ef4cb805e0f8cecc9608fa78ed52fac2b03 not found: ID does not exist" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.335041 4782 scope.go:117] "RemoveContainer" containerID="7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9" Nov 24 12:17:44 crc kubenswrapper[4782]: E1124 12:17:44.335694 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9\": container with ID starting with 7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9 not found: ID does not exist" containerID="7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.335734 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9"} err="failed to get container status \"7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9\": rpc error: code = NotFound desc = could not find container \"7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9\": container with ID starting with 7456274895fb113458f9495e99b8e9a2ab7365805e1938edda1964e464e98ec9 not found: ID does not exist" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.346775 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.346800 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr65t\" (UniqueName: \"kubernetes.io/projected/5629c15e-a039-4589-b825-9ea034388d42-kube-api-access-nr65t\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.346809 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5629c15e-a039-4589-b825-9ea034388d42-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.346817 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.346825 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.346833 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629c15e-a039-4589-b825-9ea034388d42-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.604115 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.617545 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.635896 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:44 crc kubenswrapper[4782]: E1124 12:17:44.636363 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-log" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.636406 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-log" Nov 24 12:17:44 crc kubenswrapper[4782]: E1124 12:17:44.636437 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-api" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.636446 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-api" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.636728 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-log" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.636763 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5629c15e-a039-4589-b825-9ea034388d42" containerName="nova-api-api" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.637770 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.641804 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.642013 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.642070 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.646789 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.653414 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.653507 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrngr\" (UniqueName: \"kubernetes.io/projected/39c39c96-99d5-4e76-9c99-20d1310fe1ac-kube-api-access-wrngr\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.653545 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-config-data\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.653611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39c39c96-99d5-4e76-9c99-20d1310fe1ac-logs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.653634 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.653700 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.754664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-config-data\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.754720 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39c39c96-99d5-4e76-9c99-20d1310fe1ac-logs\") pod \"nova-api-0\" (UID: 
\"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.754738 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.754789 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.754874 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.754940 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrngr\" (UniqueName: \"kubernetes.io/projected/39c39c96-99d5-4e76-9c99-20d1310fe1ac-kube-api-access-wrngr\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.763595 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39c39c96-99d5-4e76-9c99-20d1310fe1ac-logs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.767020 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.768585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-config-data\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.768698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.769364 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c39c96-99d5-4e76-9c99-20d1310fe1ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.783561 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrngr\" (UniqueName: \"kubernetes.io/projected/39c39c96-99d5-4e76-9c99-20d1310fe1ac-kube-api-access-wrngr\") pod \"nova-api-0\" (UID: \"39c39c96-99d5-4e76-9c99-20d1310fe1ac\") " 
pod="openstack/nova-api-0" Nov 24 12:17:44 crc kubenswrapper[4782]: I1124 12:17:44.962139 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:17:45 crc kubenswrapper[4782]: I1124 12:17:45.433943 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:17:45 crc kubenswrapper[4782]: I1124 12:17:45.502365 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5629c15e-a039-4589-b825-9ea034388d42" path="/var/lib/kubelet/pods/5629c15e-a039-4589-b825-9ea034388d42/volumes" Nov 24 12:17:45 crc kubenswrapper[4782]: I1124 12:17:45.546997 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:17:46 crc kubenswrapper[4782]: I1124 12:17:46.290341 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39c39c96-99d5-4e76-9c99-20d1310fe1ac","Type":"ContainerStarted","Data":"aaf479ee451bd2eb0eb3852eaa26e71237584a79d41d4c83a75f1fe93446f036"} Nov 24 12:17:46 crc kubenswrapper[4782]: I1124 12:17:46.290676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39c39c96-99d5-4e76-9c99-20d1310fe1ac","Type":"ContainerStarted","Data":"d192bd576e0e7361e1d899fcacf0475b5a8ed0dde85b8fa15610130494327f96"} Nov 24 12:17:46 crc kubenswrapper[4782]: I1124 12:17:46.290688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39c39c96-99d5-4e76-9c99-20d1310fe1ac","Type":"ContainerStarted","Data":"0fbb7050d18c614416c86b3ad0c4799c80546bff06775241909b1be50ab63128"} Nov 24 12:17:46 crc kubenswrapper[4782]: I1124 12:17:46.319218 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.319192795 podStartE2EDuration="2.319192795s" podCreationTimestamp="2025-11-24 12:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:46.308667572 +0000 UTC m=+1315.552501361" watchObservedRunningTime="2025-11-24 12:17:46.319192795 +0000 UTC m=+1315.563026564" Nov 24 12:17:47 crc kubenswrapper[4782]: I1124 12:17:47.633504 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:17:47 crc kubenswrapper[4782]: I1124 12:17:47.634740 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:17:50 crc kubenswrapper[4782]: I1124 12:17:50.546789 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:17:50 crc kubenswrapper[4782]: I1124 12:17:50.576654 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:17:51 crc kubenswrapper[4782]: I1124 12:17:51.360163 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:17:52 crc kubenswrapper[4782]: I1124 12:17:52.632604 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:17:52 crc kubenswrapper[4782]: I1124 12:17:52.632659 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:17:53 crc kubenswrapper[4782]: I1124 12:17:53.646273 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="ef6b2c28-7003-45fe-922e-40b6f5c2a43a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:53 crc kubenswrapper[4782]: I1124 12:17:53.646347 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ef6b2c28-7003-45fe-922e-40b6f5c2a43a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:54 crc kubenswrapper[4782]: I1124 12:17:54.962744 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:17:54 crc kubenswrapper[4782]: I1124 12:17:54.963003 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:17:55 crc kubenswrapper[4782]: I1124 12:17:55.976606 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="39c39c96-99d5-4e76-9c99-20d1310fe1ac" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:55 crc kubenswrapper[4782]: I1124 12:17:55.976654 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="39c39c96-99d5-4e76-9c99-20d1310fe1ac" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:17:56 crc kubenswrapper[4782]: I1124 12:17:56.630065 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 12:18:00 crc kubenswrapper[4782]: I1124 12:18:00.410653 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:18:00 crc kubenswrapper[4782]: I1124 12:18:00.411040 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:18:00 crc kubenswrapper[4782]: I1124 12:18:00.411097 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:18:00 crc kubenswrapper[4782]: I1124 12:18:00.412227 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"312faf553f7586c5bdcb5502ffdf818587cd31bfce204c8d9ae99d508ff07095"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:18:00 crc kubenswrapper[4782]: I1124 12:18:00.412549 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" 
containerID="cri-o://312faf553f7586c5bdcb5502ffdf818587cd31bfce204c8d9ae99d508ff07095" gracePeriod=600 Nov 24 12:18:01 crc kubenswrapper[4782]: I1124 12:18:01.429761 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="312faf553f7586c5bdcb5502ffdf818587cd31bfce204c8d9ae99d508ff07095" exitCode=0 Nov 24 12:18:01 crc kubenswrapper[4782]: I1124 12:18:01.429965 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"312faf553f7586c5bdcb5502ffdf818587cd31bfce204c8d9ae99d508ff07095"} Nov 24 12:18:01 crc kubenswrapper[4782]: I1124 12:18:01.430329 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac"} Nov 24 12:18:01 crc kubenswrapper[4782]: I1124 12:18:01.430353 4782 scope.go:117] "RemoveContainer" containerID="b6c7ce8c7383e549b268b473ebff145c305170441de464250aa04d4d9e063e16" Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.637541 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.637879 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.644397 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.645322 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.970474 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.971430 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.971768 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.971826 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.980022 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.980074 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:18:13 crc kubenswrapper[4782]: I1124 12:18:13.241751 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:18:14 crc kubenswrapper[4782]: I1124 12:18:14.060074 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:18:18 crc kubenswrapper[4782]: I1124 12:18:18.876420 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="rabbitmq" containerID="cri-o://2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9" gracePeriod=604796 Nov 24 12:18:19 crc 
Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.637541 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.637879 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.644397 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 12:18:02 crc kubenswrapper[4782]: I1124 12:18:02.645322 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.970474 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.971430 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.971768 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.971826 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.980022 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 12:18:04 crc kubenswrapper[4782]: I1124 12:18:04.980074 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 12:18:13 crc kubenswrapper[4782]: I1124 12:18:13.241751 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 12:18:14 crc kubenswrapper[4782]: I1124 12:18:14.060074 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 12:18:18 crc kubenswrapper[4782]: I1124 12:18:18.876420 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="rabbitmq" containerID="cri-o://2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9" gracePeriod=604796
Nov 24 12:18:19 crc kubenswrapper[4782]: I1124 12:18:19.021161 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" containerName="rabbitmq" containerID="cri-o://e41d3a35f61e13b3a4eb91c530f3ac601e2353c3e2f73b65fef7fbdf005abbd6" gracePeriod=604795
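The two rabbitmq kills carry gracePeriod values just under 604800 s (7 days), consistent with the RabbitMQ pods specifying a one-week terminationGracePeriodSeconds (an assumption; the pod spec is not in this log) and the kubelet logging the grace remaining after the API DELETEs at 12:18:13/12:18:14:

    SEVEN_DAYS = 7 * 24 * 60 * 60  # 604800 s, assumed terminationGracePeriodSeconds
    print(SEVEN_DAYS - 604796)     # 4 -> seconds between the cell1 DELETE (12:18:14) and its kill (12:18:18)
    print(SEVEN_DAYS - 604795)     # 5 -> seconds between the server-0 DELETE (12:18:13) and its kill (12:18:19)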
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-tls\") pod \"819def2d-6f25-42ca-91f6-6951b7b97549\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.632125 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzfz\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-kube-api-access-kmzfz\") pod \"819def2d-6f25-42ca-91f6-6951b7b97549\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.632150 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-server-conf\") pod \"819def2d-6f25-42ca-91f6-6951b7b97549\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.632184 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-erlang-cookie\") pod \"819def2d-6f25-42ca-91f6-6951b7b97549\" (UID: \"819def2d-6f25-42ca-91f6-6951b7b97549\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.632523 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.633983 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.638665 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.652301 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-kube-api-access-kmzfz" (OuterVolumeSpecName: "kube-api-access-kmzfz") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "kube-api-access-kmzfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.661976 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819def2d-6f25-42ca-91f6-6951b7b97549-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.662117 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/819def2d-6f25-42ca-91f6-6951b7b97549-pod-info" (OuterVolumeSpecName: "pod-info") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.670777 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.680968 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.686901 4782 generic.go:334] "Generic (PLEG): container finished" podID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" containerID="e41d3a35f61e13b3a4eb91c530f3ac601e2353c3e2f73b65fef7fbdf005abbd6" exitCode=0 Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.687024 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb","Type":"ContainerDied","Data":"e41d3a35f61e13b3a4eb91c530f3ac601e2353c3e2f73b65fef7fbdf005abbd6"} Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.689247 4782 generic.go:334] "Generic (PLEG): container finished" podID="819def2d-6f25-42ca-91f6-6951b7b97549" containerID="2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9" exitCode=0 Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.689275 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"819def2d-6f25-42ca-91f6-6951b7b97549","Type":"ContainerDied","Data":"2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9"} Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.689291 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"819def2d-6f25-42ca-91f6-6951b7b97549","Type":"ContainerDied","Data":"e1491967029859c779d4c872d5fca7910891c19a55404d892fb7230343e75207"} Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.689311 4782 scope.go:117] "RemoveContainer" containerID="2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.689399 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.709182 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-config-data" (OuterVolumeSpecName: "config-data") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738299 4782 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/819def2d-6f25-42ca-91f6-6951b7b97549-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738335 4782 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738348 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738356 4782 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/819def2d-6f25-42ca-91f6-6951b7b97549-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738420 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738431 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738443 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzfz\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-kube-api-access-kmzfz\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.738455 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.752126 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-server-conf" (OuterVolumeSpecName: "server-conf") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.764278 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.766313 4782 scope.go:117] "RemoveContainer" containerID="152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.790172 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.807594 4782 scope.go:117] "RemoveContainer" containerID="2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9" Nov 24 12:18:25 crc kubenswrapper[4782]: E1124 12:18:25.819765 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9\": container with ID starting with 2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9 not found: ID does not exist" containerID="2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.820144 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9"} err="failed to get container status \"2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9\": rpc error: code = NotFound desc = could not find container \"2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9\": container with ID starting with 2cb5d23bf7395cffe128683ece71717371595f518b4c09c752ccbaa0f06eaff9 not found: ID does not exist" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.820250 4782 scope.go:117] "RemoveContainer" containerID="152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f" Nov 24 12:18:25 crc kubenswrapper[4782]: E1124 12:18:25.821643 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f\": container with ID starting with 152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f not found: ID does not exist" containerID="152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.821709 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f"} err="failed to get container status \"152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f\": rpc error: code = NotFound desc = could not find container \"152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f\": container with ID starting with 152b0c742252b1615f85f0d68a7871badeb66c3e42e169dc09c1f0a742a3b33f not found: ID does not exist" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.846768 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-plugins\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.846853 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-erlang-cookie\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.846900 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-server-conf\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.846956 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847004 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-erlang-cookie-secret\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847044 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fth46\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-kube-api-access-fth46\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847074 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-pod-info\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847112 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-config-data\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-plugins-conf\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847166 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-tls\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847201 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-confd\") pod \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\" (UID: \"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb\") " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847684 4782 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/819def2d-6f25-42ca-91f6-6951b7b97549-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.847714 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.860732 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.862567 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.863142 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.872332 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.873613 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-kube-api-access-fth46" (OuterVolumeSpecName: "kube-api-access-fth46") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "kube-api-access-fth46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.882192 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-pod-info" (OuterVolumeSpecName: "pod-info") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.890692 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.901010 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.901071 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-config-data" (OuterVolumeSpecName: "config-data") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.944525 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "819def2d-6f25-42ca-91f6-6951b7b97549" (UID: "819def2d-6f25-42ca-91f6-6951b7b97549"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960544 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960590 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960606 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/819def2d-6f25-42ca-91f6-6951b7b97549-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960636 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960650 4782 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960662 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fth46\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-kube-api-access-fth46\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960673 4782 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960683 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 
12:18:25.960694 4782 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:25 crc kubenswrapper[4782]: I1124 12:18:25.960704 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.003394 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.020558 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-server-conf" (OuterVolumeSpecName: "server-conf") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.064903 4782 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.065135 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.068288 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.085163 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.105846 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:18:26 crc kubenswrapper[4782]: E1124 12:18:26.106290 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="rabbitmq" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.106305 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="rabbitmq" Nov 24 12:18:26 crc kubenswrapper[4782]: E1124 12:18:26.106316 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="setup-container" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.106323 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="setup-container" Nov 24 12:18:26 crc kubenswrapper[4782]: E1124 12:18:26.106338 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" containerName="rabbitmq" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.106344 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" containerName="rabbitmq" Nov 24 12:18:26 crc kubenswrapper[4782]: E1124 12:18:26.106392 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" containerName="setup-container" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.106400 
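The teardown above follows a fixed pattern: reconciler_common.go:159 starts an UnmountVolume for each volume that is still mounted but no longer desired, operation_generator.go:803 reports TearDown success, and reconciler_common.go:293 finally marks the volume detached. A minimal sketch of that desired-vs-actual diff, purely illustrative (generic Go, not kubelet source; all names are hypothetical):

```go
// Illustrative sketch of the reconcile pattern, not kubelet's implementation:
// volumes present in the actual state but absent from the desired state get an
// UnmountVolume, then get reported as detached once teardown succeeds.
package main

import "fmt"

type volumeKey struct {
	podUID, volume string
}

func reconcile(desired, actual map[volumeKey]bool) {
	for k := range actual {
		if !desired[k] {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", k.volume, k.podUID)
			delete(actual, k) // stands in for TearDown succeeding
			fmt.Printf("Volume detached for volume %q\n", k.volume)
		}
	}
}

func main() {
	actual := map[volumeKey]bool{
		{podUID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb", volume: "plugins-conf"}: true,
	}
	reconcile(map[volumeKey]bool{}, actual) // empty desired state: the pod was deleted
}
```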
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.106568 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" containerName="rabbitmq"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.106583 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" containerName="rabbitmq"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.107553 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.112800 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.113058 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.113162 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.113312 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2bbf4"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.113434 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.113526 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.113638 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.117436 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.117769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" (UID: "76bab5be-7cad-4dba-a4f4-bd53ab7f53fb"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.166790 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268463 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268541 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39483c87-eb4a-4adf-81de-ae60ec596fe8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268567 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268591 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268641 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bf5\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-kube-api-access-l5bf5\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268658 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39483c87-eb4a-4adf-81de-ae60ec596fe8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268746 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.268769 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.370678 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5bf5\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-kube-api-access-l5bf5\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.370941 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371069 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39483c87-eb4a-4adf-81de-ae60ec596fe8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371144 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371586 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371662 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371750 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39483c87-eb4a-4adf-81de-ae60ec596fe8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371833 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.371916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.372817 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.373071 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.373445 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.373743 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.374280 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.375635 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39483c87-eb4a-4adf-81de-ae60ec596fe8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.377744 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39483c87-eb4a-4adf-81de-ae60ec596fe8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.379185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39483c87-eb4a-4adf-81de-ae60ec596fe8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.379221 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.384282 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.395049 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5bf5\" (UniqueName: \"kubernetes.io/projected/39483c87-eb4a-4adf-81de-ae60ec596fe8-kube-api-access-l5bf5\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.422925 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39483c87-eb4a-4adf-81de-ae60ec596fe8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.534953 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.712700 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76bab5be-7cad-4dba-a4f4-bd53ab7f53fb","Type":"ContainerDied","Data":"14cd36d35e31d53754bffb89bc3e2bf5eed690da14099498027c4b8f74944e03"}
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.713151 4782 scope.go:117] "RemoveContainer" containerID="e41d3a35f61e13b3a4eb91c530f3ac601e2353c3e2f73b65fef7fbdf005abbd6"
Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.713338 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.785032 4782 scope.go:117] "RemoveContainer" containerID="2b7a8ecd6eae3c7f121e653b6d11687117e4f39b41b193458a93c02b4f52a8da" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.789441 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.826020 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.837435 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.839697 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.842874 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.843219 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.843551 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.844593 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-dd7fl" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.845109 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.845338 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.845599 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.858264 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986000 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986063 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986121 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986151 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986168 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986185 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mm75\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-kube-api-access-7mm75\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986217 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986231 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986279 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:26 crc kubenswrapper[4782]: I1124 12:18:26.986442 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-config-data\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088507 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088594 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088629 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088645 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088660 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mm75\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-kube-api-access-7mm75\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088702 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088774 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088803 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-config-data\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.088829 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.089989 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.090255 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.096231 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.096989 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.097308 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-config-data\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.097409 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.097409 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.099757 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.102359 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.114624 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.117626 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mm75\" (UniqueName: \"kubernetes.io/projected/63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9-kube-api-access-7mm75\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.127707 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.171479 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.186481 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.505000 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76bab5be-7cad-4dba-a4f4-bd53ab7f53fb" path="/var/lib/kubelet/pods/76bab5be-7cad-4dba-a4f4-bd53ab7f53fb/volumes" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.506354 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="819def2d-6f25-42ca-91f6-6951b7b97549" path="/var/lib/kubelet/pods/819def2d-6f25-42ca-91f6-6951b7b97549/volumes" Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.685605 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.722831 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39483c87-eb4a-4adf-81de-ae60ec596fe8","Type":"ContainerStarted","Data":"98cf139422d158bb950b2ccedf13b4dd1e62288248b1b3a6fc091e6c1af65b37"} Nov 24 12:18:27 crc kubenswrapper[4782]: I1124 12:18:27.724108 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9","Type":"ContainerStarted","Data":"8b044727bfca589a020e08689bbd74037775ac0ea79fa50d8cb4a540842bc7c7"} Nov 24 12:18:28 crc kubenswrapper[4782]: I1124 12:18:28.736135 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39483c87-eb4a-4adf-81de-ae60ec596fe8","Type":"ContainerStarted","Data":"55bbd542f7aca933fd2da98e90a0ca6fa9e0dd4ebcfa175280d8fe26f5cbe8a5"} Nov 24 12:18:28 crc kubenswrapper[4782]: I1124 12:18:28.857879 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-fcr9c"] Nov 24 12:18:28 crc kubenswrapper[4782]: I1124 12:18:28.859781 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:28 crc kubenswrapper[4782]: I1124 12:18:28.862292 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 12:18:28 crc kubenswrapper[4782]: I1124 12:18:28.879358 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-fcr9c"] Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.029833 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.030181 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-config\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.030210 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.030269 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-svc\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.030483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6vd\" (UniqueName: \"kubernetes.io/projected/a9dc81d7-8424-41df-bbcb-f3da0637b38e-kube-api-access-2m6vd\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.030543 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.030741 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.132811 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6vd\" (UniqueName: \"kubernetes.io/projected/a9dc81d7-8424-41df-bbcb-f3da0637b38e-kube-api-access-2m6vd\") pod 
\"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.132894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.133027 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.133139 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.133276 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-config\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.133358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.133576 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-svc\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.135227 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.135274 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-svc\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.135844 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " 
pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.136286 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.136468 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-config\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.137076 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.156328 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6vd\" (UniqueName: \"kubernetes.io/projected/a9dc81d7-8424-41df-bbcb-f3da0637b38e-kube-api-access-2m6vd\") pod \"dnsmasq-dns-5576978c7c-fcr9c\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.188730 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.627806 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-fcr9c"] Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.772913 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" event={"ID":"a9dc81d7-8424-41df-bbcb-f3da0637b38e","Type":"ContainerStarted","Data":"bad1077e3814dfb961bc5ea46385a7d1243ec9360a0d270940b3f25314553520"} Nov 24 12:18:29 crc kubenswrapper[4782]: I1124 12:18:29.777070 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9","Type":"ContainerStarted","Data":"8b52cac322d06695bdfc858ba9e9d4dcf2d828d11e1ebdc79deaf8b909683e19"} Nov 24 12:18:30 crc kubenswrapper[4782]: I1124 12:18:30.792271 4782 generic.go:334] "Generic (PLEG): container finished" podID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerID="9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6" exitCode=0 Nov 24 12:18:30 crc kubenswrapper[4782]: I1124 12:18:30.793569 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" event={"ID":"a9dc81d7-8424-41df-bbcb-f3da0637b38e","Type":"ContainerDied","Data":"9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6"} Nov 24 12:18:31 crc kubenswrapper[4782]: I1124 12:18:31.802651 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" event={"ID":"a9dc81d7-8424-41df-bbcb-f3da0637b38e","Type":"ContainerStarted","Data":"f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94"} Nov 24 12:18:31 crc kubenswrapper[4782]: I1124 12:18:31.803662 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:31 crc kubenswrapper[4782]: I1124 12:18:31.831951 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" podStartSLOduration=3.831915436 podStartE2EDuration="3.831915436s" podCreationTimestamp="2025-11-24 12:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:31.822956045 +0000 UTC m=+1361.066789814" watchObservedRunningTime="2025-11-24 12:18:31.831915436 +0000 UTC m=+1361.075749195" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.190491 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.289805 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-k5q8p"] Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.290457 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerName="dnsmasq-dns" containerID="cri-o://2cfdeb6bfcea7c353056f9158008d0301187252564f1aa239a37f21e37ca75e7" gracePeriod=10 Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.458504 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: connect: connection refused" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.538815 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56f7ccd8f7-qkj56"] Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.550512 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.567357 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f7ccd8f7-qkj56"] Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.635483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-dns-swift-storage-0\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.635617 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b2h5\" (UniqueName: \"kubernetes.io/projected/77f4f46d-6156-43bb-b49d-6371cb8921c1-kube-api-access-6b2h5\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.635657 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-ovsdbserver-sb\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.635682 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-dns-svc\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.635710 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-config\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.635739 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-openstack-edpm-ipam\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.635828 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-ovsdbserver-nb\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.737512 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-ovsdbserver-nb\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.737839 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-dns-swift-storage-0\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.737930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b2h5\" (UniqueName: \"kubernetes.io/projected/77f4f46d-6156-43bb-b49d-6371cb8921c1-kube-api-access-6b2h5\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.737960 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-ovsdbserver-sb\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.737988 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-dns-svc\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.738024 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-config\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.738061 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-openstack-edpm-ipam\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.739034 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-openstack-edpm-ipam\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.739621 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-ovsdbserver-nb\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.739809 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-ovsdbserver-sb\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.740288 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-dns-svc\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.740595 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-config\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.741322 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77f4f46d-6156-43bb-b49d-6371cb8921c1-dns-swift-storage-0\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.774179 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b2h5\" (UniqueName: \"kubernetes.io/projected/77f4f46d-6156-43bb-b49d-6371cb8921c1-kube-api-access-6b2h5\") pod \"dnsmasq-dns-56f7ccd8f7-qkj56\" (UID: \"77f4f46d-6156-43bb-b49d-6371cb8921c1\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.879662 4782 generic.go:334] "Generic (PLEG): container finished" podID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerID="2cfdeb6bfcea7c353056f9158008d0301187252564f1aa239a37f21e37ca75e7" exitCode=0 Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.879735 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" event={"ID":"1eb193b3-10e9-491a-b0cf-4e7c00375198","Type":"ContainerDied","Data":"2cfdeb6bfcea7c353056f9158008d0301187252564f1aa239a37f21e37ca75e7"} Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.879799 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" event={"ID":"1eb193b3-10e9-491a-b0cf-4e7c00375198","Type":"ContainerDied","Data":"2cc363102b5df6b89b6816f213660646e16827879922caadee9c088d551bb3ad"} Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.879817 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc363102b5df6b89b6816f213660646e16827879922caadee9c088d551bb3ad" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.893855 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:39 crc kubenswrapper[4782]: I1124 12:18:39.923888 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.050749 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2djh\" (UniqueName: \"kubernetes.io/projected/1eb193b3-10e9-491a-b0cf-4e7c00375198-kube-api-access-n2djh\") pod \"1eb193b3-10e9-491a-b0cf-4e7c00375198\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.050817 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-svc\") pod \"1eb193b3-10e9-491a-b0cf-4e7c00375198\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.050938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-nb\") pod \"1eb193b3-10e9-491a-b0cf-4e7c00375198\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.050969 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-sb\") pod \"1eb193b3-10e9-491a-b0cf-4e7c00375198\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.051024 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-config\") pod \"1eb193b3-10e9-491a-b0cf-4e7c00375198\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.051078 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0\") pod \"1eb193b3-10e9-491a-b0cf-4e7c00375198\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.067342 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb193b3-10e9-491a-b0cf-4e7c00375198-kube-api-access-n2djh" (OuterVolumeSpecName: "kube-api-access-n2djh") pod "1eb193b3-10e9-491a-b0cf-4e7c00375198" (UID: "1eb193b3-10e9-491a-b0cf-4e7c00375198"). InnerVolumeSpecName "kube-api-access-n2djh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.121979 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1eb193b3-10e9-491a-b0cf-4e7c00375198" (UID: "1eb193b3-10e9-491a-b0cf-4e7c00375198"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.122804 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-config" (OuterVolumeSpecName: "config") pod "1eb193b3-10e9-491a-b0cf-4e7c00375198" (UID: "1eb193b3-10e9-491a-b0cf-4e7c00375198"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.148507 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1eb193b3-10e9-491a-b0cf-4e7c00375198" (UID: "1eb193b3-10e9-491a-b0cf-4e7c00375198"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.151955 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1eb193b3-10e9-491a-b0cf-4e7c00375198" (UID: "1eb193b3-10e9-491a-b0cf-4e7c00375198"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.152588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0\") pod \"1eb193b3-10e9-491a-b0cf-4e7c00375198\" (UID: \"1eb193b3-10e9-491a-b0cf-4e7c00375198\") " Nov 24 12:18:40 crc kubenswrapper[4782]: W1124 12:18:40.152724 4782 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1eb193b3-10e9-491a-b0cf-4e7c00375198/volumes/kubernetes.io~configmap/dns-swift-storage-0 Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.152739 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1eb193b3-10e9-491a-b0cf-4e7c00375198" (UID: "1eb193b3-10e9-491a-b0cf-4e7c00375198"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.153113 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.153125 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2djh\" (UniqueName: \"kubernetes.io/projected/1eb193b3-10e9-491a-b0cf-4e7c00375198-kube-api-access-n2djh\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.153138 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.153146 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.153155 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.169876 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1eb193b3-10e9-491a-b0cf-4e7c00375198" (UID: "1eb193b3-10e9-491a-b0cf-4e7c00375198"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.257340 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb193b3-10e9-491a-b0cf-4e7c00375198-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.491507 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f7ccd8f7-qkj56"] Nov 24 12:18:40 crc kubenswrapper[4782]: W1124 12:18:40.494265 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77f4f46d_6156_43bb_b49d_6371cb8921c1.slice/crio-7e4626d27c88edc75dbf868a6c2a09faa93109a132d9c126211778a6604573b7 WatchSource:0}: Error finding container 7e4626d27c88edc75dbf868a6c2a09faa93109a132d9c126211778a6604573b7: Status 404 returned error can't find the container with id 7e4626d27c88edc75dbf868a6c2a09faa93109a132d9c126211778a6604573b7 Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.889931 4782 generic.go:334] "Generic (PLEG): container finished" podID="77f4f46d-6156-43bb-b49d-6371cb8921c1" containerID="fc6f54b08a4f80d5d36728a2345219c248d17d4c673035a84357f2eaf64efd01" exitCode=0 Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.891568 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-k5q8p" Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.891495 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" event={"ID":"77f4f46d-6156-43bb-b49d-6371cb8921c1","Type":"ContainerDied","Data":"fc6f54b08a4f80d5d36728a2345219c248d17d4c673035a84357f2eaf64efd01"} Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.891706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" event={"ID":"77f4f46d-6156-43bb-b49d-6371cb8921c1","Type":"ContainerStarted","Data":"7e4626d27c88edc75dbf868a6c2a09faa93109a132d9c126211778a6604573b7"} Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.937933 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-k5q8p"] Nov 24 12:18:40 crc kubenswrapper[4782]: I1124 12:18:40.947501 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-k5q8p"] Nov 24 12:18:41 crc kubenswrapper[4782]: I1124 12:18:41.500992 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" path="/var/lib/kubelet/pods/1eb193b3-10e9-491a-b0cf-4e7c00375198/volumes" Nov 24 12:18:41 crc kubenswrapper[4782]: I1124 12:18:41.904277 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" event={"ID":"77f4f46d-6156-43bb-b49d-6371cb8921c1","Type":"ContainerStarted","Data":"f64897592dc7c141010d2b53b168be6a6ea1a72156dbda92465f5ba4710a76cf"} Nov 24 12:18:41 crc kubenswrapper[4782]: I1124 12:18:41.904535 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:41 crc kubenswrapper[4782]: I1124 12:18:41.933073 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" podStartSLOduration=2.933052581 podStartE2EDuration="2.933052581s" podCreationTimestamp="2025-11-24 12:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:41.926804633 +0000 UTC m=+1371.170638452" watchObservedRunningTime="2025-11-24 12:18:41.933052581 +0000 UTC m=+1371.176886350" Nov 24 12:18:49 crc kubenswrapper[4782]: I1124 12:18:49.895546 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56f7ccd8f7-qkj56" Nov 24 12:18:49 crc kubenswrapper[4782]: I1124 12:18:49.978981 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-fcr9c"] Nov 24 12:18:49 crc kubenswrapper[4782]: I1124 12:18:49.979231 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" podUID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerName="dnsmasq-dns" containerID="cri-o://f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94" gracePeriod=10 Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.475483 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.647892 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-config\") pod \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.647951 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-sb\") pod \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.647980 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-openstack-edpm-ipam\") pod \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.648016 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-swift-storage-0\") pod \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.648041 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m6vd\" (UniqueName: \"kubernetes.io/projected/a9dc81d7-8424-41df-bbcb-f3da0637b38e-kube-api-access-2m6vd\") pod \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.648065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-svc\") pod \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.648114 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-nb\") pod \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\" (UID: \"a9dc81d7-8424-41df-bbcb-f3da0637b38e\") " Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.680039 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9dc81d7-8424-41df-bbcb-f3da0637b38e-kube-api-access-2m6vd" (OuterVolumeSpecName: "kube-api-access-2m6vd") pod "a9dc81d7-8424-41df-bbcb-f3da0637b38e" (UID: "a9dc81d7-8424-41df-bbcb-f3da0637b38e"). InnerVolumeSpecName "kube-api-access-2m6vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.730508 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-config" (OuterVolumeSpecName: "config") pod "a9dc81d7-8424-41df-bbcb-f3da0637b38e" (UID: "a9dc81d7-8424-41df-bbcb-f3da0637b38e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.750923 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.750964 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m6vd\" (UniqueName: \"kubernetes.io/projected/a9dc81d7-8424-41df-bbcb-f3da0637b38e-kube-api-access-2m6vd\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.782271 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9dc81d7-8424-41df-bbcb-f3da0637b38e" (UID: "a9dc81d7-8424-41df-bbcb-f3da0637b38e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.795108 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9dc81d7-8424-41df-bbcb-f3da0637b38e" (UID: "a9dc81d7-8424-41df-bbcb-f3da0637b38e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.804446 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9dc81d7-8424-41df-bbcb-f3da0637b38e" (UID: "a9dc81d7-8424-41df-bbcb-f3da0637b38e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.804876 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a9dc81d7-8424-41df-bbcb-f3da0637b38e" (UID: "a9dc81d7-8424-41df-bbcb-f3da0637b38e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.816984 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9dc81d7-8424-41df-bbcb-f3da0637b38e" (UID: "a9dc81d7-8424-41df-bbcb-f3da0637b38e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.852138 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.852171 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.852184 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.852192 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.852201 4782 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9dc81d7-8424-41df-bbcb-f3da0637b38e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.985162 4782 generic.go:334] "Generic (PLEG): container finished" podID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerID="f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94" exitCode=0 Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.985202 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" event={"ID":"a9dc81d7-8424-41df-bbcb-f3da0637b38e","Type":"ContainerDied","Data":"f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94"} Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.985232 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" event={"ID":"a9dc81d7-8424-41df-bbcb-f3da0637b38e","Type":"ContainerDied","Data":"bad1077e3814dfb961bc5ea46385a7d1243ec9360a0d270940b3f25314553520"} Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.985239 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-fcr9c" Nov 24 12:18:50 crc kubenswrapper[4782]: I1124 12:18:50.985252 4782 scope.go:117] "RemoveContainer" containerID="f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94" Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.006948 4782 scope.go:117] "RemoveContainer" containerID="9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6" Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.016448 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-fcr9c"] Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.026689 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-fcr9c"] Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.041523 4782 scope.go:117] "RemoveContainer" containerID="f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94" Nov 24 12:18:51 crc kubenswrapper[4782]: E1124 12:18:51.041948 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94\": container with ID starting with f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94 not found: ID does not exist" containerID="f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94" Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.041981 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94"} err="failed to get container status \"f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94\": rpc error: code = NotFound desc = could not find container \"f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94\": container with ID starting with f4c5e375caecab9ec61f042990b14f0428057bb8c50d2341b8662c81ab8e5d94 not found: ID does not exist" Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.042006 4782 scope.go:117] "RemoveContainer" containerID="9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6" Nov 24 12:18:51 crc kubenswrapper[4782]: E1124 12:18:51.042283 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6\": container with ID starting with 9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6 not found: ID does not exist" containerID="9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6" Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.042317 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6"} err="failed to get container status \"9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6\": rpc error: code = NotFound desc = could not find container \"9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6\": container with ID starting with 9993024b825ce651ab5b093dbbfedd95de2f25b23b5541574af73b967c1bd8f6 not found: ID does not exist" Nov 24 12:18:51 crc kubenswrapper[4782]: I1124 12:18:51.502105 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" path="/var/lib/kubelet/pods/a9dc81d7-8424-41df-bbcb-f3da0637b38e/volumes" Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.079751 4782 
generic.go:334] "Generic (PLEG): container finished" podID="63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9" containerID="8b52cac322d06695bdfc858ba9e9d4dcf2d828d11e1ebdc79deaf8b909683e19" exitCode=0 Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.079813 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9","Type":"ContainerDied","Data":"8b52cac322d06695bdfc858ba9e9d4dcf2d828d11e1ebdc79deaf8b909683e19"} Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.082749 4782 generic.go:334] "Generic (PLEG): container finished" podID="39483c87-eb4a-4adf-81de-ae60ec596fe8" containerID="55bbd542f7aca933fd2da98e90a0ca6fa9e0dd4ebcfa175280d8fe26f5cbe8a5" exitCode=0 Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.082785 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39483c87-eb4a-4adf-81de-ae60ec596fe8","Type":"ContainerDied","Data":"55bbd542f7aca933fd2da98e90a0ca6fa9e0dd4ebcfa175280d8fe26f5cbe8a5"} Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.677181 4782 scope.go:117] "RemoveContainer" containerID="aa520cd9a15189177e5356fbcc528ef643480fc50a0d0b2fe1938a519bcfd0d0" Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.705068 4782 scope.go:117] "RemoveContainer" containerID="9d4d407e5e78e5878d1581fa450294b3cfd8631b292d94ba5495ed5889634558" Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.770036 4782 scope.go:117] "RemoveContainer" containerID="a9678f10945e1b4ebc929226745a5e2a662caa0cb8e0c89afc643b644dfef8cf" Nov 24 12:19:01 crc kubenswrapper[4782]: I1124 12:19:01.800903 4782 scope.go:117] "RemoveContainer" containerID="a6055be165c9235d948bcc160349d566b692b228dcf1932854b96c1f7eff2baa" Nov 24 12:19:02 crc kubenswrapper[4782]: I1124 12:19:02.091690 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9","Type":"ContainerStarted","Data":"a7ef3ae52567750350fc896c00e4d2127af0a5e90761c7d1b8033169b41ce0f7"} Nov 24 12:19:02 crc kubenswrapper[4782]: I1124 12:19:02.091926 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 12:19:02 crc kubenswrapper[4782]: I1124 12:19:02.095052 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39483c87-eb4a-4adf-81de-ae60ec596fe8","Type":"ContainerStarted","Data":"630913dc7950f82a8d7eb11721733e74b83440e2d5307538c56e2c4e99ec77c8"} Nov 24 12:19:02 crc kubenswrapper[4782]: I1124 12:19:02.095516 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:19:02 crc kubenswrapper[4782]: I1124 12:19:02.117913 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.117898028 podStartE2EDuration="36.117898028s" podCreationTimestamp="2025-11-24 12:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:02.115362059 +0000 UTC m=+1391.359195828" watchObservedRunningTime="2025-11-24 12:19:02.117898028 +0000 UTC m=+1391.361731797" Nov 24 12:19:02 crc kubenswrapper[4782]: I1124 12:19:02.143211 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.143193058 podStartE2EDuration="36.143193058s" 
podCreationTimestamp="2025-11-24 12:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:02.137831064 +0000 UTC m=+1391.381664833" watchObservedRunningTime="2025-11-24 12:19:02.143193058 +0000 UTC m=+1391.387026837" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.074735 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hglhm"] Nov 24 12:19:03 crc kubenswrapper[4782]: E1124 12:19:03.075239 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerName="dnsmasq-dns" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.075261 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerName="dnsmasq-dns" Nov 24 12:19:03 crc kubenswrapper[4782]: E1124 12:19:03.075280 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerName="dnsmasq-dns" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.075288 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerName="dnsmasq-dns" Nov 24 12:19:03 crc kubenswrapper[4782]: E1124 12:19:03.075301 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerName="init" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.075309 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerName="init" Nov 24 12:19:03 crc kubenswrapper[4782]: E1124 12:19:03.075338 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerName="init" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.075346 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerName="init" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.075601 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9dc81d7-8424-41df-bbcb-f3da0637b38e" containerName="dnsmasq-dns" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.075634 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb193b3-10e9-491a-b0cf-4e7c00375198" containerName="dnsmasq-dns" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.077296 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.090889 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hglhm"] Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.215835 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-catalog-content\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.216083 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fdz7\" (UniqueName: \"kubernetes.io/projected/cee57c00-8e76-40b5-a314-5505ec725637-kube-api-access-6fdz7\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.216514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-utilities\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.318879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-utilities\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.319269 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-catalog-content\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.319493 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fdz7\" (UniqueName: \"kubernetes.io/projected/cee57c00-8e76-40b5-a314-5505ec725637-kube-api-access-6fdz7\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.320119 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-utilities\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.320834 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-catalog-content\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.377236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6fdz7\" (UniqueName: \"kubernetes.io/projected/cee57c00-8e76-40b5-a314-5505ec725637-kube-api-access-6fdz7\") pod \"redhat-operators-hglhm\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.407421 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:03 crc kubenswrapper[4782]: I1124 12:19:03.904388 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hglhm"] Nov 24 12:19:04 crc kubenswrapper[4782]: I1124 12:19:04.128072 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hglhm" event={"ID":"cee57c00-8e76-40b5-a314-5505ec725637","Type":"ContainerStarted","Data":"2505cd3ee824c9e1eca983fa97977f19918efeaf4a86e91c5dd5d33753556aec"} Nov 24 12:19:05 crc kubenswrapper[4782]: I1124 12:19:05.138556 4782 generic.go:334] "Generic (PLEG): container finished" podID="cee57c00-8e76-40b5-a314-5505ec725637" containerID="ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c" exitCode=0 Nov 24 12:19:05 crc kubenswrapper[4782]: I1124 12:19:05.138620 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hglhm" event={"ID":"cee57c00-8e76-40b5-a314-5505ec725637","Type":"ContainerDied","Data":"ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c"} Nov 24 12:19:06 crc kubenswrapper[4782]: I1124 12:19:06.148904 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hglhm" event={"ID":"cee57c00-8e76-40b5-a314-5505ec725637","Type":"ContainerStarted","Data":"fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271"} Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.174595 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24"] Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.179308 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.182100 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.182450 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.182578 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.182690 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.199688 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24"] Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.341474 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.341555 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.341760 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.342008 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qr4s\" (UniqueName: \"kubernetes.io/projected/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-kube-api-access-9qr4s\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.443518 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qr4s\" (UniqueName: \"kubernetes.io/projected/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-kube-api-access-9qr4s\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.443629 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.443677 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.443710 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.448535 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.449398 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.452818 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.463280 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qr4s\" (UniqueName: \"kubernetes.io/projected/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-kube-api-access-9qr4s\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:08 crc kubenswrapper[4782]: I1124 12:19:08.506809 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:09 crc kubenswrapper[4782]: I1124 12:19:09.378396 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24"] Nov 24 12:19:10 crc kubenswrapper[4782]: I1124 12:19:10.184243 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" event={"ID":"aa0cf12a-8750-4351-a6a7-e66bf1bb074c","Type":"ContainerStarted","Data":"b304c178dde720f78aeb1ba84b503447e50365d3ac53f3b63bbf9e6d6ebf2927"} Nov 24 12:19:11 crc kubenswrapper[4782]: I1124 12:19:11.198570 4782 generic.go:334] "Generic (PLEG): container finished" podID="cee57c00-8e76-40b5-a314-5505ec725637" containerID="fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271" exitCode=0 Nov 24 12:19:11 crc kubenswrapper[4782]: I1124 12:19:11.198619 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hglhm" event={"ID":"cee57c00-8e76-40b5-a314-5505ec725637","Type":"ContainerDied","Data":"fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271"} Nov 24 12:19:12 crc kubenswrapper[4782]: I1124 12:19:12.219703 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hglhm" event={"ID":"cee57c00-8e76-40b5-a314-5505ec725637","Type":"ContainerStarted","Data":"f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661"} Nov 24 12:19:12 crc kubenswrapper[4782]: I1124 12:19:12.251123 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hglhm" podStartSLOduration=2.743637953 podStartE2EDuration="9.251105526s" podCreationTimestamp="2025-11-24 12:19:03 +0000 UTC" firstStartedPulling="2025-11-24 12:19:05.140319722 +0000 UTC m=+1394.384153491" lastFinishedPulling="2025-11-24 12:19:11.647787295 +0000 UTC m=+1400.891621064" observedRunningTime="2025-11-24 12:19:12.239089923 +0000 UTC m=+1401.482923692" watchObservedRunningTime="2025-11-24 12:19:12.251105526 +0000 UTC m=+1401.494939295" Nov 24 12:19:13 crc kubenswrapper[4782]: I1124 12:19:13.408195 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:13 crc kubenswrapper[4782]: I1124 12:19:13.408605 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:14 crc kubenswrapper[4782]: I1124 12:19:14.461503 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hglhm" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="registry-server" probeResult="failure" output=< Nov 24 12:19:14 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:19:14 crc kubenswrapper[4782]: > Nov 24 12:19:16 crc kubenswrapper[4782]: I1124 12:19:16.539610 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:19:17 crc kubenswrapper[4782]: I1124 12:19:17.174577 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 12:19:24 crc kubenswrapper[4782]: I1124 12:19:24.350734 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" 
event={"ID":"aa0cf12a-8750-4351-a6a7-e66bf1bb074c","Type":"ContainerStarted","Data":"573709a47aea20ce343758786f49477cf414c4bb1c2c89df74af37c94c7b7990"} Nov 24 12:19:24 crc kubenswrapper[4782]: I1124 12:19:24.374491 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" podStartSLOduration=2.2805248430000002 podStartE2EDuration="16.37447166s" podCreationTimestamp="2025-11-24 12:19:08 +0000 UTC" firstStartedPulling="2025-11-24 12:19:09.403110377 +0000 UTC m=+1398.646944146" lastFinishedPulling="2025-11-24 12:19:23.497057194 +0000 UTC m=+1412.740890963" observedRunningTime="2025-11-24 12:19:24.366781463 +0000 UTC m=+1413.610615252" watchObservedRunningTime="2025-11-24 12:19:24.37447166 +0000 UTC m=+1413.618305439" Nov 24 12:19:24 crc kubenswrapper[4782]: I1124 12:19:24.455841 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hglhm" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="registry-server" probeResult="failure" output=< Nov 24 12:19:24 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:19:24 crc kubenswrapper[4782]: > Nov 24 12:19:34 crc kubenswrapper[4782]: I1124 12:19:34.458801 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hglhm" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="registry-server" probeResult="failure" output=< Nov 24 12:19:34 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:19:34 crc kubenswrapper[4782]: > Nov 24 12:19:36 crc kubenswrapper[4782]: I1124 12:19:36.453352 4782 generic.go:334] "Generic (PLEG): container finished" podID="aa0cf12a-8750-4351-a6a7-e66bf1bb074c" containerID="573709a47aea20ce343758786f49477cf414c4bb1c2c89df74af37c94c7b7990" exitCode=0 Nov 24 12:19:36 crc kubenswrapper[4782]: I1124 12:19:36.453400 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" event={"ID":"aa0cf12a-8750-4351-a6a7-e66bf1bb074c","Type":"ContainerDied","Data":"573709a47aea20ce343758786f49477cf414c4bb1c2c89df74af37c94c7b7990"} Nov 24 12:19:37 crc kubenswrapper[4782]: I1124 12:19:37.884088 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.002865 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-repo-setup-combined-ca-bundle\") pod \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.002943 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-ssh-key\") pod \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.002980 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qr4s\" (UniqueName: \"kubernetes.io/projected/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-kube-api-access-9qr4s\") pod \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.003027 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-inventory\") pod \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\" (UID: \"aa0cf12a-8750-4351-a6a7-e66bf1bb074c\") " Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.008700 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-kube-api-access-9qr4s" (OuterVolumeSpecName: "kube-api-access-9qr4s") pod "aa0cf12a-8750-4351-a6a7-e66bf1bb074c" (UID: "aa0cf12a-8750-4351-a6a7-e66bf1bb074c"). InnerVolumeSpecName "kube-api-access-9qr4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.009769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "aa0cf12a-8750-4351-a6a7-e66bf1bb074c" (UID: "aa0cf12a-8750-4351-a6a7-e66bf1bb074c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.032234 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-inventory" (OuterVolumeSpecName: "inventory") pod "aa0cf12a-8750-4351-a6a7-e66bf1bb074c" (UID: "aa0cf12a-8750-4351-a6a7-e66bf1bb074c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.037970 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aa0cf12a-8750-4351-a6a7-e66bf1bb074c" (UID: "aa0cf12a-8750-4351-a6a7-e66bf1bb074c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.104978 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.105021 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.105039 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qr4s\" (UniqueName: \"kubernetes.io/projected/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-kube-api-access-9qr4s\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.105055 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0cf12a-8750-4351-a6a7-e66bf1bb074c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.473150 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" event={"ID":"aa0cf12a-8750-4351-a6a7-e66bf1bb074c","Type":"ContainerDied","Data":"b304c178dde720f78aeb1ba84b503447e50365d3ac53f3b63bbf9e6d6ebf2927"} Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.473189 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b304c178dde720f78aeb1ba84b503447e50365d3ac53f3b63bbf9e6d6ebf2927" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.473307 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.557362 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l"] Nov 24 12:19:38 crc kubenswrapper[4782]: E1124 12:19:38.557788 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0cf12a-8750-4351-a6a7-e66bf1bb074c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.557807 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0cf12a-8750-4351-a6a7-e66bf1bb074c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.558061 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0cf12a-8750-4351-a6a7-e66bf1bb074c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.560573 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.567773 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.567926 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.568323 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.568690 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.583820 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l"] Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.718806 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.719054 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wpjp\" (UniqueName: \"kubernetes.io/projected/0b6970e9-155c-4b80-98ee-9305e8b942f2-kube-api-access-6wpjp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.719236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.821069 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.821681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wpjp\" (UniqueName: \"kubernetes.io/projected/0b6970e9-155c-4b80-98ee-9305e8b942f2-kube-api-access-6wpjp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.821851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.825269 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.825281 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.844141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wpjp\" (UniqueName: \"kubernetes.io/projected/0b6970e9-155c-4b80-98ee-9305e8b942f2-kube-api-access-6wpjp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d2g2l\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:38 crc kubenswrapper[4782]: I1124 12:19:38.879143 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:39 crc kubenswrapper[4782]: I1124 12:19:39.419837 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l"] Nov 24 12:19:39 crc kubenswrapper[4782]: I1124 12:19:39.483315 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" event={"ID":"0b6970e9-155c-4b80-98ee-9305e8b942f2","Type":"ContainerStarted","Data":"c93cf3178ca0318ada43db3755117d2118a4ca5eb490ae530708801eb4b56f93"} Nov 24 12:19:40 crc kubenswrapper[4782]: I1124 12:19:40.494855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" event={"ID":"0b6970e9-155c-4b80-98ee-9305e8b942f2","Type":"ContainerStarted","Data":"5b989b87eca3aa72820b5e7d0da1a259b944c25c93fe8a9f69fab1b389286377"} Nov 24 12:19:40 crc kubenswrapper[4782]: I1124 12:19:40.514283 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" podStartSLOduration=1.866012257 podStartE2EDuration="2.514263827s" podCreationTimestamp="2025-11-24 12:19:38 +0000 UTC" firstStartedPulling="2025-11-24 12:19:39.424058604 +0000 UTC m=+1428.667892373" lastFinishedPulling="2025-11-24 12:19:40.072310174 +0000 UTC m=+1429.316143943" observedRunningTime="2025-11-24 12:19:40.510858875 +0000 UTC m=+1429.754692644" watchObservedRunningTime="2025-11-24 12:19:40.514263827 +0000 UTC m=+1429.758097616" Nov 24 12:19:43 crc kubenswrapper[4782]: I1124 12:19:43.462881 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:43 crc kubenswrapper[4782]: I1124 12:19:43.507539 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:43 crc kubenswrapper[4782]: I1124 12:19:43.538429 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="0b6970e9-155c-4b80-98ee-9305e8b942f2" containerID="5b989b87eca3aa72820b5e7d0da1a259b944c25c93fe8a9f69fab1b389286377" exitCode=0 Nov 24 12:19:43 crc kubenswrapper[4782]: I1124 12:19:43.538528 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" event={"ID":"0b6970e9-155c-4b80-98ee-9305e8b942f2","Type":"ContainerDied","Data":"5b989b87eca3aa72820b5e7d0da1a259b944c25c93fe8a9f69fab1b389286377"} Nov 24 12:19:43 crc kubenswrapper[4782]: I1124 12:19:43.695724 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hglhm"] Nov 24 12:19:44 crc kubenswrapper[4782]: I1124 12:19:44.546567 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hglhm" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="registry-server" containerID="cri-o://f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661" gracePeriod=2 Nov 24 12:19:44 crc kubenswrapper[4782]: I1124 12:19:44.946713 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.041483 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-inventory\") pod \"0b6970e9-155c-4b80-98ee-9305e8b942f2\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.041546 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wpjp\" (UniqueName: \"kubernetes.io/projected/0b6970e9-155c-4b80-98ee-9305e8b942f2-kube-api-access-6wpjp\") pod \"0b6970e9-155c-4b80-98ee-9305e8b942f2\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.041633 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-ssh-key\") pod \"0b6970e9-155c-4b80-98ee-9305e8b942f2\" (UID: \"0b6970e9-155c-4b80-98ee-9305e8b942f2\") " Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.049701 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6970e9-155c-4b80-98ee-9305e8b942f2-kube-api-access-6wpjp" (OuterVolumeSpecName: "kube-api-access-6wpjp") pod "0b6970e9-155c-4b80-98ee-9305e8b942f2" (UID: "0b6970e9-155c-4b80-98ee-9305e8b942f2"). InnerVolumeSpecName "kube-api-access-6wpjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.071680 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.077863 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-inventory" (OuterVolumeSpecName: "inventory") pod "0b6970e9-155c-4b80-98ee-9305e8b942f2" (UID: "0b6970e9-155c-4b80-98ee-9305e8b942f2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.088018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b6970e9-155c-4b80-98ee-9305e8b942f2" (UID: "0b6970e9-155c-4b80-98ee-9305e8b942f2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.144130 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.144167 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wpjp\" (UniqueName: \"kubernetes.io/projected/0b6970e9-155c-4b80-98ee-9305e8b942f2-kube-api-access-6wpjp\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.144177 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b6970e9-155c-4b80-98ee-9305e8b942f2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.245418 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-utilities\") pod \"cee57c00-8e76-40b5-a314-5505ec725637\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.245508 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fdz7\" (UniqueName: \"kubernetes.io/projected/cee57c00-8e76-40b5-a314-5505ec725637-kube-api-access-6fdz7\") pod \"cee57c00-8e76-40b5-a314-5505ec725637\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.245573 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-catalog-content\") pod \"cee57c00-8e76-40b5-a314-5505ec725637\" (UID: \"cee57c00-8e76-40b5-a314-5505ec725637\") " Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.246338 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-utilities" (OuterVolumeSpecName: "utilities") pod "cee57c00-8e76-40b5-a314-5505ec725637" (UID: "cee57c00-8e76-40b5-a314-5505ec725637"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.249198 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee57c00-8e76-40b5-a314-5505ec725637-kube-api-access-6fdz7" (OuterVolumeSpecName: "kube-api-access-6fdz7") pod "cee57c00-8e76-40b5-a314-5505ec725637" (UID: "cee57c00-8e76-40b5-a314-5505ec725637"). InnerVolumeSpecName "kube-api-access-6fdz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.333578 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cee57c00-8e76-40b5-a314-5505ec725637" (UID: "cee57c00-8e76-40b5-a314-5505ec725637"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.348034 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fdz7\" (UniqueName: \"kubernetes.io/projected/cee57c00-8e76-40b5-a314-5505ec725637-kube-api-access-6fdz7\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.348071 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.348080 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee57c00-8e76-40b5-a314-5505ec725637-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.556779 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" event={"ID":"0b6970e9-155c-4b80-98ee-9305e8b942f2","Type":"ContainerDied","Data":"c93cf3178ca0318ada43db3755117d2118a4ca5eb490ae530708801eb4b56f93"} Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.557056 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c93cf3178ca0318ada43db3755117d2118a4ca5eb490ae530708801eb4b56f93" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.556790 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d2g2l" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.559317 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hglhm" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.559357 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hglhm" event={"ID":"cee57c00-8e76-40b5-a314-5505ec725637","Type":"ContainerDied","Data":"f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661"} Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.559422 4782 scope.go:117] "RemoveContainer" containerID="f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.560019 4782 generic.go:334] "Generic (PLEG): container finished" podID="cee57c00-8e76-40b5-a314-5505ec725637" containerID="f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661" exitCode=0 Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.560113 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hglhm" event={"ID":"cee57c00-8e76-40b5-a314-5505ec725637","Type":"ContainerDied","Data":"2505cd3ee824c9e1eca983fa97977f19918efeaf4a86e91c5dd5d33753556aec"} Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.591578 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hglhm"] Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.609204 4782 scope.go:117] "RemoveContainer" containerID="fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.613340 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hglhm"] Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.643055 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw"] Nov 24 12:19:45 crc kubenswrapper[4782]: E1124 12:19:45.643605 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="extract-utilities" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.643632 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="extract-utilities" Nov 24 12:19:45 crc kubenswrapper[4782]: E1124 12:19:45.643667 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="extract-content" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.643684 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="extract-content" Nov 24 12:19:45 crc kubenswrapper[4782]: E1124 12:19:45.643710 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="registry-server" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.643718 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="registry-server" Nov 24 12:19:45 crc kubenswrapper[4782]: E1124 12:19:45.643745 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6970e9-155c-4b80-98ee-9305e8b942f2" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.643755 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6970e9-155c-4b80-98ee-9305e8b942f2" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 
12:19:45.643977 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6970e9-155c-4b80-98ee-9305e8b942f2" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.644012 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee57c00-8e76-40b5-a314-5505ec725637" containerName="registry-server"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.644715 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.650110 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.650574 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.651387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.663527 4782 scope.go:117] "RemoveContainer" containerID="ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.665153 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.678958 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw"]
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.737906 4782 scope.go:117] "RemoveContainer" containerID="f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661"
Nov 24 12:19:45 crc kubenswrapper[4782]: E1124 12:19:45.738347 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661\": container with ID starting with f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661 not found: ID does not exist" containerID="f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.738489 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661"} err="failed to get container status \"f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661\": rpc error: code = NotFound desc = could not find container \"f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661\": container with ID starting with f4b8a8ac95e3193180b6a660f26691306c916d0add269c1e4a2c063229225661 not found: ID does not exist"
Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.738523 4782 scope.go:117] "RemoveContainer" containerID="fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271"
Nov 24 12:19:45 crc kubenswrapper[4782]: E1124 12:19:45.738934 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271\": container with ID starting with fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271 not found: ID does not exist" containerID="fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271"
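The NotFound errors above are benign: by the time the kubelet asks the runtime for the container's status to finish the delete, CRI-O has already removed it, so the kubelet logs the error and moves on. A sketch of the usual idempotent-delete pattern against a gRPC runtime service (removeContainer is a hypothetical stand-in, not the kubelet's actual function; status.Code and codes.NotFound are the real gRPC helpers):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Simulate the runtime having already deleted the container.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeContainer("fb38dafc7d20")
	if status.Code(err) == codes.NotFound {
		err = nil // already gone: the delete has effectively succeeded
	}
	fmt.Println("remove error:", err)
}
```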
containerID="fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.738965 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271"} err="failed to get container status \"fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271\": rpc error: code = NotFound desc = could not find container \"fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271\": container with ID starting with fb38dafc7d2068c108f3727e0d2d4b17f462cc64ae5e0e892b0cb7186c3af271 not found: ID does not exist" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.738985 4782 scope.go:117] "RemoveContainer" containerID="ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c" Nov 24 12:19:45 crc kubenswrapper[4782]: E1124 12:19:45.739234 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c\": container with ID starting with ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c not found: ID does not exist" containerID="ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.739266 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c"} err="failed to get container status \"ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c\": rpc error: code = NotFound desc = could not find container \"ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c\": container with ID starting with ba73aa9cf3ff558014d481cd56cd84a5fa76f05f7cca33623e760304c671bc3c not found: ID does not exist" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.777342 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.777493 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vcz\" (UniqueName: \"kubernetes.io/projected/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-kube-api-access-79vcz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.777553 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.777577 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.879638 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.879808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vcz\" (UniqueName: \"kubernetes.io/projected/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-kube-api-access-79vcz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.879896 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.879929 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.884093 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.884970 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.895149 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:45 crc kubenswrapper[4782]: I1124 12:19:45.899085 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vcz\" (UniqueName: \"kubernetes.io/projected/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-kube-api-access-79vcz\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:46 crc kubenswrapper[4782]: I1124 12:19:46.072550 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:19:46 crc kubenswrapper[4782]: I1124 12:19:46.565206 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw"] Nov 24 12:19:47 crc kubenswrapper[4782]: I1124 12:19:47.500652 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee57c00-8e76-40b5-a314-5505ec725637" path="/var/lib/kubelet/pods/cee57c00-8e76-40b5-a314-5505ec725637/volumes" Nov 24 12:19:47 crc kubenswrapper[4782]: I1124 12:19:47.581888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" event={"ID":"672fd75c-f2f7-4396-a11e-e4e5abf8ab13","Type":"ContainerStarted","Data":"c5e8e60811b76aa34534f3488d634dc3eba813beb8183af529abe4d1107f9704"} Nov 24 12:19:47 crc kubenswrapper[4782]: I1124 12:19:47.582260 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" event={"ID":"672fd75c-f2f7-4396-a11e-e4e5abf8ab13","Type":"ContainerStarted","Data":"bedbdc9fb9cc4d7de1a8eccf03a3f3a06b48f1904c92c2138eb53b513e755b48"} Nov 24 12:19:47 crc kubenswrapper[4782]: I1124 12:19:47.607910 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" podStartSLOduration=1.981147998 podStartE2EDuration="2.607886549s" podCreationTimestamp="2025-11-24 12:19:45 +0000 UTC" firstStartedPulling="2025-11-24 12:19:46.572067708 +0000 UTC m=+1435.815901477" lastFinishedPulling="2025-11-24 12:19:47.198806259 +0000 UTC m=+1436.442640028" observedRunningTime="2025-11-24 12:19:47.601654098 +0000 UTC m=+1436.845487887" watchObservedRunningTime="2025-11-24 12:19:47.607886549 +0000 UTC m=+1436.851720328" Nov 24 12:19:55 crc kubenswrapper[4782]: I1124 12:19:55.903209 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hl7d9"] Nov 24 12:19:55 crc kubenswrapper[4782]: I1124 12:19:55.906837 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:55 crc kubenswrapper[4782]: I1124 12:19:55.930253 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hl7d9"] Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.076967 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-utilities\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.077115 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-catalog-content\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.077201 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxsr\" (UniqueName: \"kubernetes.io/projected/65e42773-8442-4447-bb7b-830e50556fab-kube-api-access-7cxsr\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.178827 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cxsr\" (UniqueName: \"kubernetes.io/projected/65e42773-8442-4447-bb7b-830e50556fab-kube-api-access-7cxsr\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.178948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-utilities\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.179004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-catalog-content\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.179521 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-utilities\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.179529 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-catalog-content\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.202958 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7cxsr\" (UniqueName: \"kubernetes.io/projected/65e42773-8442-4447-bb7b-830e50556fab-kube-api-access-7cxsr\") pod \"community-operators-hl7d9\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.223834 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:19:56 crc kubenswrapper[4782]: I1124 12:19:56.711453 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hl7d9"] Nov 24 12:19:57 crc kubenswrapper[4782]: I1124 12:19:57.695301 4782 generic.go:334] "Generic (PLEG): container finished" podID="65e42773-8442-4447-bb7b-830e50556fab" containerID="90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24" exitCode=0 Nov 24 12:19:57 crc kubenswrapper[4782]: I1124 12:19:57.695488 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl7d9" event={"ID":"65e42773-8442-4447-bb7b-830e50556fab","Type":"ContainerDied","Data":"90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24"} Nov 24 12:19:57 crc kubenswrapper[4782]: I1124 12:19:57.695626 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl7d9" event={"ID":"65e42773-8442-4447-bb7b-830e50556fab","Type":"ContainerStarted","Data":"ad6319c48a34b7c7608961900cb64583827486cd969ea281feb620932bc1a4c8"} Nov 24 12:19:58 crc kubenswrapper[4782]: I1124 12:19:58.707006 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl7d9" event={"ID":"65e42773-8442-4447-bb7b-830e50556fab","Type":"ContainerStarted","Data":"df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185"} Nov 24 12:20:00 crc kubenswrapper[4782]: I1124 12:20:00.411218 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:20:00 crc kubenswrapper[4782]: I1124 12:20:00.411638 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:20:00 crc kubenswrapper[4782]: I1124 12:20:00.726831 4782 generic.go:334] "Generic (PLEG): container finished" podID="65e42773-8442-4447-bb7b-830e50556fab" containerID="df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185" exitCode=0 Nov 24 12:20:00 crc kubenswrapper[4782]: I1124 12:20:00.726876 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl7d9" event={"ID":"65e42773-8442-4447-bb7b-830e50556fab","Type":"ContainerDied","Data":"df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185"} Nov 24 12:20:01 crc kubenswrapper[4782]: I1124 12:20:01.737129 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl7d9" event={"ID":"65e42773-8442-4447-bb7b-830e50556fab","Type":"ContainerStarted","Data":"328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e"} Nov 24 
12:20:01 crc kubenswrapper[4782]: I1124 12:20:01.758017 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hl7d9" podStartSLOduration=3.33678226 podStartE2EDuration="6.757996456s" podCreationTimestamp="2025-11-24 12:19:55 +0000 UTC" firstStartedPulling="2025-11-24 12:19:57.697012565 +0000 UTC m=+1446.940846334" lastFinishedPulling="2025-11-24 12:20:01.118226751 +0000 UTC m=+1450.362060530" observedRunningTime="2025-11-24 12:20:01.752636387 +0000 UTC m=+1450.996470176" watchObservedRunningTime="2025-11-24 12:20:01.757996456 +0000 UTC m=+1451.001830225" Nov 24 12:20:01 crc kubenswrapper[4782]: I1124 12:20:01.945615 4782 scope.go:117] "RemoveContainer" containerID="0dde0431b5b38235ea5f82b2cd0ee70ec5dcb36654805639b6c376c7f2a96508" Nov 24 12:20:01 crc kubenswrapper[4782]: I1124 12:20:01.992680 4782 scope.go:117] "RemoveContainer" containerID="64cbbdb567acf5e868c8f354beb70b099e03307cd06d11f8948a9b2d2ca6c089" Nov 24 12:20:06 crc kubenswrapper[4782]: I1124 12:20:06.224022 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:20:06 crc kubenswrapper[4782]: I1124 12:20:06.224492 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:20:06 crc kubenswrapper[4782]: I1124 12:20:06.267690 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:20:06 crc kubenswrapper[4782]: I1124 12:20:06.826102 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:20:06 crc kubenswrapper[4782]: I1124 12:20:06.875986 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hl7d9"] Nov 24 12:20:08 crc kubenswrapper[4782]: I1124 12:20:08.795776 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hl7d9" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="registry-server" containerID="cri-o://328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e" gracePeriod=2 Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.294208 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.429766 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-utilities\") pod \"65e42773-8442-4447-bb7b-830e50556fab\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.429957 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cxsr\" (UniqueName: \"kubernetes.io/projected/65e42773-8442-4447-bb7b-830e50556fab-kube-api-access-7cxsr\") pod \"65e42773-8442-4447-bb7b-830e50556fab\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.430013 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-catalog-content\") pod \"65e42773-8442-4447-bb7b-830e50556fab\" (UID: \"65e42773-8442-4447-bb7b-830e50556fab\") " Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.431915 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-utilities" (OuterVolumeSpecName: "utilities") pod "65e42773-8442-4447-bb7b-830e50556fab" (UID: "65e42773-8442-4447-bb7b-830e50556fab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.436438 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e42773-8442-4447-bb7b-830e50556fab-kube-api-access-7cxsr" (OuterVolumeSpecName: "kube-api-access-7cxsr") pod "65e42773-8442-4447-bb7b-830e50556fab" (UID: "65e42773-8442-4447-bb7b-830e50556fab"). InnerVolumeSpecName "kube-api-access-7cxsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.479475 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65e42773-8442-4447-bb7b-830e50556fab" (UID: "65e42773-8442-4447-bb7b-830e50556fab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.533092 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cxsr\" (UniqueName: \"kubernetes.io/projected/65e42773-8442-4447-bb7b-830e50556fab-kube-api-access-7cxsr\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.533133 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.533146 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e42773-8442-4447-bb7b-830e50556fab-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.808181 4782 generic.go:334] "Generic (PLEG): container finished" podID="65e42773-8442-4447-bb7b-830e50556fab" containerID="328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e" exitCode=0 Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.808229 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl7d9" event={"ID":"65e42773-8442-4447-bb7b-830e50556fab","Type":"ContainerDied","Data":"328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e"} Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.808261 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl7d9" event={"ID":"65e42773-8442-4447-bb7b-830e50556fab","Type":"ContainerDied","Data":"ad6319c48a34b7c7608961900cb64583827486cd969ea281feb620932bc1a4c8"} Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.808286 4782 scope.go:117] "RemoveContainer" containerID="328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.808447 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl7d9" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.841392 4782 scope.go:117] "RemoveContainer" containerID="df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.847457 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hl7d9"] Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.869599 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hl7d9"] Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.873911 4782 scope.go:117] "RemoveContainer" containerID="90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.909047 4782 scope.go:117] "RemoveContainer" containerID="328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e" Nov 24 12:20:09 crc kubenswrapper[4782]: E1124 12:20:09.909586 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e\": container with ID starting with 328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e not found: ID does not exist" containerID="328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.909635 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e"} err="failed to get container status \"328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e\": rpc error: code = NotFound desc = could not find container \"328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e\": container with ID starting with 328ed36ce10590e397e58151b513807f6fda784335ce6267f1ea66ab9aab8d2e not found: ID does not exist" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.909670 4782 scope.go:117] "RemoveContainer" containerID="df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185" Nov 24 12:20:09 crc kubenswrapper[4782]: E1124 12:20:09.910037 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185\": container with ID starting with df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185 not found: ID does not exist" containerID="df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.910074 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185"} err="failed to get container status \"df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185\": rpc error: code = NotFound desc = could not find container \"df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185\": container with ID starting with df6687736ef17b1f60831a3db54a4c884bd15560b5203291454d6aa06b663185 not found: ID does not exist" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.910113 4782 scope.go:117] "RemoveContainer" containerID="90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24" Nov 24 12:20:09 crc kubenswrapper[4782]: E1124 12:20:09.910423 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24\": container with ID starting with 90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24 not found: ID does not exist" containerID="90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24" Nov 24 12:20:09 crc kubenswrapper[4782]: I1124 12:20:09.910452 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24"} err="failed to get container status \"90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24\": rpc error: code = NotFound desc = could not find container \"90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24\": container with ID starting with 90c41cd16238a5b6080c1743d7b644893c871e58a47876eef122afe2d50acd24 not found: ID does not exist" Nov 24 12:20:11 crc kubenswrapper[4782]: I1124 12:20:11.500119 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e42773-8442-4447-bb7b-830e50556fab" path="/var/lib/kubelet/pods/65e42773-8442-4447-bb7b-830e50556fab/volumes" Nov 24 12:20:30 crc kubenswrapper[4782]: I1124 12:20:30.411108 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:20:30 crc kubenswrapper[4782]: I1124 12:20:30.411647 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:21:00 crc kubenswrapper[4782]: I1124 12:21:00.411297 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:21:00 crc kubenswrapper[4782]: I1124 12:21:00.411938 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:21:00 crc kubenswrapper[4782]: I1124 12:21:00.411988 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:21:00 crc kubenswrapper[4782]: I1124 12:21:00.412772 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:21:00 crc kubenswrapper[4782]: I1124 12:21:00.412826 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" 
podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" gracePeriod=600 Nov 24 12:21:00 crc kubenswrapper[4782]: E1124 12:21:00.532775 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:21:01 crc kubenswrapper[4782]: I1124 12:21:01.270201 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" exitCode=0 Nov 24 12:21:01 crc kubenswrapper[4782]: I1124 12:21:01.270248 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac"} Nov 24 12:21:01 crc kubenswrapper[4782]: I1124 12:21:01.270282 4782 scope.go:117] "RemoveContainer" containerID="312faf553f7586c5bdcb5502ffdf818587cd31bfce204c8d9ae99d508ff07095" Nov 24 12:21:01 crc kubenswrapper[4782]: I1124 12:21:01.270949 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:21:01 crc kubenswrapper[4782]: E1124 12:21:01.271218 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:21:16 crc kubenswrapper[4782]: I1124 12:21:16.491562 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:21:16 crc kubenswrapper[4782]: E1124 12:21:16.492509 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:21:27 crc kubenswrapper[4782]: I1124 12:21:27.492141 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:21:27 crc kubenswrapper[4782]: E1124 12:21:27.494842 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:21:41 crc kubenswrapper[4782]: I1124 12:21:41.508148 4782 scope.go:117] 
"RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:21:41 crc kubenswrapper[4782]: E1124 12:21:41.509360 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:21:54 crc kubenswrapper[4782]: I1124 12:21:54.491288 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:21:54 crc kubenswrapper[4782]: E1124 12:21:54.491971 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:22:05 crc kubenswrapper[4782]: I1124 12:22:05.491484 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:22:05 crc kubenswrapper[4782]: E1124 12:22:05.492359 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:22:16 crc kubenswrapper[4782]: I1124 12:22:16.491622 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:22:16 crc kubenswrapper[4782]: E1124 12:22:16.492316 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:22:28 crc kubenswrapper[4782]: I1124 12:22:28.048003 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ndnmn"] Nov 24 12:22:28 crc kubenswrapper[4782]: I1124 12:22:28.063641 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fa32-account-create-xcqvn"] Nov 24 12:22:28 crc kubenswrapper[4782]: I1124 12:22:28.075545 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ndnmn"] Nov 24 12:22:28 crc kubenswrapper[4782]: I1124 12:22:28.085070 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-fa32-account-create-xcqvn"] Nov 24 12:22:28 crc kubenswrapper[4782]: I1124 12:22:28.490798 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:22:28 crc kubenswrapper[4782]: E1124 12:22:28.491465 4782 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:22:29 crc kubenswrapper[4782]: I1124 12:22:29.502037 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82af1928-0c70-41fd-8402-52f61b5a5ccf" path="/var/lib/kubelet/pods/82af1928-0c70-41fd-8402-52f61b5a5ccf/volumes" Nov 24 12:22:29 crc kubenswrapper[4782]: I1124 12:22:29.505024 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd25993-6371-49a5-bf60-29da33949583" path="/var/lib/kubelet/pods/bcd25993-6371-49a5-bf60-29da33949583/volumes" Nov 24 12:22:33 crc kubenswrapper[4782]: I1124 12:22:33.028808 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ec86-account-create-2vspw"] Nov 24 12:22:33 crc kubenswrapper[4782]: I1124 12:22:33.036146 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5mdhg"] Nov 24 12:22:33 crc kubenswrapper[4782]: I1124 12:22:33.044572 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ec86-account-create-2vspw"] Nov 24 12:22:33 crc kubenswrapper[4782]: I1124 12:22:33.052706 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5mdhg"] Nov 24 12:22:33 crc kubenswrapper[4782]: I1124 12:22:33.502641 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72848b25-6c96-4159-898d-4b9e7ee158bc" path="/var/lib/kubelet/pods/72848b25-6c96-4159-898d-4b9e7ee158bc/volumes" Nov 24 12:22:33 crc kubenswrapper[4782]: I1124 12:22:33.503535 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763d5f9f-7507-45f0-a872-222bc321e3d4" path="/var/lib/kubelet/pods/763d5f9f-7507-45f0-a872-222bc321e3d4/volumes" Nov 24 12:22:34 crc kubenswrapper[4782]: I1124 12:22:34.025152 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rz9l9"] Nov 24 12:22:34 crc kubenswrapper[4782]: I1124 12:22:34.031794 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rz9l9"] Nov 24 12:22:34 crc kubenswrapper[4782]: I1124 12:22:34.041142 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-49d6-account-create-465ld"] Nov 24 12:22:34 crc kubenswrapper[4782]: I1124 12:22:34.051048 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-49d6-account-create-465ld"] Nov 24 12:22:35 crc kubenswrapper[4782]: I1124 12:22:35.503578 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f48cdf3-4a58-4730-a0f1-914a253e7ab1" path="/var/lib/kubelet/pods/0f48cdf3-4a58-4730-a0f1-914a253e7ab1/volumes" Nov 24 12:22:35 crc kubenswrapper[4782]: I1124 12:22:35.506297 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e80316-d322-433c-a47c-6f7cfcf1c267" path="/var/lib/kubelet/pods/e0e80316-d322-433c-a47c-6f7cfcf1c267/volumes" Nov 24 12:22:39 crc kubenswrapper[4782]: I1124 12:22:39.491243 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:22:39 crc kubenswrapper[4782]: E1124 12:22:39.492236 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:22:51 crc kubenswrapper[4782]: I1124 12:22:51.499512 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:22:51 crc kubenswrapper[4782]: E1124 12:22:51.500210 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:22:57 crc kubenswrapper[4782]: I1124 12:22:57.359705 4782 generic.go:334] "Generic (PLEG): container finished" podID="672fd75c-f2f7-4396-a11e-e4e5abf8ab13" containerID="c5e8e60811b76aa34534f3488d634dc3eba813beb8183af529abe4d1107f9704" exitCode=0 Nov 24 12:22:57 crc kubenswrapper[4782]: I1124 12:22:57.359809 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" event={"ID":"672fd75c-f2f7-4396-a11e-e4e5abf8ab13","Type":"ContainerDied","Data":"c5e8e60811b76aa34534f3488d634dc3eba813beb8183af529abe4d1107f9704"} Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.734988 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.903800 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79vcz\" (UniqueName: \"kubernetes.io/projected/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-kube-api-access-79vcz\") pod \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.903874 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-inventory\") pod \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.903972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-ssh-key\") pod \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.904137 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-bootstrap-combined-ca-bundle\") pod \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\" (UID: \"672fd75c-f2f7-4396-a11e-e4e5abf8ab13\") " Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.913826 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-kube-api-access-79vcz" (OuterVolumeSpecName: "kube-api-access-79vcz") pod "672fd75c-f2f7-4396-a11e-e4e5abf8ab13" 
(UID: "672fd75c-f2f7-4396-a11e-e4e5abf8ab13"). InnerVolumeSpecName "kube-api-access-79vcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.927398 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "672fd75c-f2f7-4396-a11e-e4e5abf8ab13" (UID: "672fd75c-f2f7-4396-a11e-e4e5abf8ab13"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.954994 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "672fd75c-f2f7-4396-a11e-e4e5abf8ab13" (UID: "672fd75c-f2f7-4396-a11e-e4e5abf8ab13"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:22:58 crc kubenswrapper[4782]: I1124 12:22:58.956072 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-inventory" (OuterVolumeSpecName: "inventory") pod "672fd75c-f2f7-4396-a11e-e4e5abf8ab13" (UID: "672fd75c-f2f7-4396-a11e-e4e5abf8ab13"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.006750 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.006966 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79vcz\" (UniqueName: \"kubernetes.io/projected/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-kube-api-access-79vcz\") on node \"crc\" DevicePath \"\"" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.007075 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.007154 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/672fd75c-f2f7-4396-a11e-e4e5abf8ab13-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.377885 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" event={"ID":"672fd75c-f2f7-4396-a11e-e4e5abf8ab13","Type":"ContainerDied","Data":"bedbdc9fb9cc4d7de1a8eccf03a3f3a06b48f1904c92c2138eb53b513e755b48"} Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.377925 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedbdc9fb9cc4d7de1a8eccf03a3f3a06b48f1904c92c2138eb53b513e755b48" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.377970 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.478111 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95"] Nov 24 12:22:59 crc kubenswrapper[4782]: E1124 12:22:59.478609 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="registry-server" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.478633 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="registry-server" Nov 24 12:22:59 crc kubenswrapper[4782]: E1124 12:22:59.478652 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="672fd75c-f2f7-4396-a11e-e4e5abf8ab13" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.478661 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="672fd75c-f2f7-4396-a11e-e4e5abf8ab13" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:22:59 crc kubenswrapper[4782]: E1124 12:22:59.478682 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="extract-utilities" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.478714 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="extract-utilities" Nov 24 12:22:59 crc kubenswrapper[4782]: E1124 12:22:59.478730 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="extract-content" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.478740 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="extract-content" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.478970 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e42773-8442-4447-bb7b-830e50556fab" containerName="registry-server" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.479004 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="672fd75c-f2f7-4396-a11e-e4e5abf8ab13" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.479894 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.481818 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.482162 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.483715 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.484591 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.501145 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95"] Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.617118 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.617186 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.617403 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwx5k\" (UniqueName: \"kubernetes.io/projected/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-kube-api-access-bwx5k\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.718578 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwx5k\" (UniqueName: \"kubernetes.io/projected/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-kube-api-access-bwx5k\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.718707 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.718731 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.721719 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.728482 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.735769 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwx5k\" (UniqueName: \"kubernetes.io/projected/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-kube-api-access-bwx5k\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ccf95\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:22:59 crc kubenswrapper[4782]: I1124 12:22:59.796310 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:23:00 crc kubenswrapper[4782]: I1124 12:23:00.329910 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95"] Nov 24 12:23:00 crc kubenswrapper[4782]: I1124 12:23:00.339124 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:23:00 crc kubenswrapper[4782]: I1124 12:23:00.388005 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" event={"ID":"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7","Type":"ContainerStarted","Data":"fe9faeda993bcce46554475bc7d722418cb93c377b0ad732fdfb6ecb0decec4e"} Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.045760 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-b6ctv"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.053185 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fd2c-account-create-kh9w8"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.062112 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-862f-account-create-wdpn8"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.071722 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b931-account-create-btftx"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.082493 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-b6ctv"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.090021 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zbnnb"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.097083 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zbnnb"] Nov 24 12:23:01 crc 
kubenswrapper[4782]: I1124 12:23:01.103552 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b931-account-create-btftx"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.110145 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-862f-account-create-wdpn8"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.117429 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fd2c-account-create-kh9w8"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.123674 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f4cxf"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.130056 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f4cxf"] Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.396598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" event={"ID":"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7","Type":"ContainerStarted","Data":"a7140f17177437ba3e2fa684c0b908cfa0b05ad11c7d5581a505f04586f57329"} Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.433549 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" podStartSLOduration=2.034690765 podStartE2EDuration="2.433522833s" podCreationTimestamp="2025-11-24 12:22:59 +0000 UTC" firstStartedPulling="2025-11-24 12:23:00.338931163 +0000 UTC m=+1629.582764932" lastFinishedPulling="2025-11-24 12:23:00.737763231 +0000 UTC m=+1629.981597000" observedRunningTime="2025-11-24 12:23:01.4172301 +0000 UTC m=+1630.661063879" watchObservedRunningTime="2025-11-24 12:23:01.433522833 +0000 UTC m=+1630.677356602" Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.507177 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b761020-fa6c-4ec4-b3d0-5eb939867db4" path="/var/lib/kubelet/pods/4b761020-fa6c-4ec4-b3d0-5eb939867db4/volumes" Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.511767 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c22dba-5e37-409b-b24f-09afa0abeaa8" path="/var/lib/kubelet/pods/50c22dba-5e37-409b-b24f-09afa0abeaa8/volumes" Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.517402 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e8c831-2986-4488-a513-6fc375b64046" path="/var/lib/kubelet/pods/64e8c831-2986-4488-a513-6fc375b64046/volumes" Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.519742 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a828c03-9f43-48c5-b19f-43a7a1f7f0c6" path="/var/lib/kubelet/pods/6a828c03-9f43-48c5-b19f-43a7a1f7f0c6/volumes" Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.522880 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31274b3-3d46-4f50-b070-3238dba1c066" path="/var/lib/kubelet/pods/f31274b3-3d46-4f50-b070-3238dba1c066/volumes" Nov 24 12:23:01 crc kubenswrapper[4782]: I1124 12:23:01.526750 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc8e8d2-4b6f-43e9-9330-b58dfde90a11" path="/var/lib/kubelet/pods/ffc8e8d2-4b6f-43e9-9330-b58dfde90a11/volumes" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.145976 4782 scope.go:117] "RemoveContainer" containerID="c89d13a523e164b8776c7628cb9ac2682bfe5de3666b4ae219f98cc6b5ee2346" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.218982 4782 
scope.go:117] "RemoveContainer" containerID="468de68c7d1d4fe9ebb818ad8ae0af4e375a1d422e1b6151a86ec4369db26627" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.254216 4782 scope.go:117] "RemoveContainer" containerID="a46f1ad047cf36c3b005e337eb35acd7930a1f464fcc02e7036807bf80ca534b" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.291285 4782 scope.go:117] "RemoveContainer" containerID="0f62dc1b7b22e4063937876a88435850ffafa9db3321e67ad2424262f153db1a" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.335834 4782 scope.go:117] "RemoveContainer" containerID="d98871cbaa1f6df1cde30a5a084019eaa955a9a85c74ff9b6bce5fc415979ee1" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.552660 4782 scope.go:117] "RemoveContainer" containerID="3688b8cdfd4b4b2afe67f8c98c1aefdaab981bd82fd599e8b72fea7f4a5853c1" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.602967 4782 scope.go:117] "RemoveContainer" containerID="6378f072ef303144c1ee0baca1bf713741fdcaa127c6ed949c0beb0f2fd6f212" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.656943 4782 scope.go:117] "RemoveContainer" containerID="b2670481543466180dc7fe65d302766fdb1b1ac1c1064dd47f972a4af682b4d7" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.681813 4782 scope.go:117] "RemoveContainer" containerID="c98517cae9b33aaf737abcdc16cd9401952fd08cd8d095e04fd87806d3695b62" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.711080 4782 scope.go:117] "RemoveContainer" containerID="e002faffcb9d0b649f67252d2070f51ea2188a7606919c374c95f8c0e8f71db6" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.730761 4782 scope.go:117] "RemoveContainer" containerID="29dbc560b3a5095722406d265498a0352d09ae3cf9a5f9459e310c23fbc0b1fa" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.754443 4782 scope.go:117] "RemoveContainer" containerID="3fa5a71e889cae74b41b88c3a4fb820d3131f0439d065fd03337179b749af699" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.776311 4782 scope.go:117] "RemoveContainer" containerID="5b6360d9cd9365b8723a869bae7714bb422fad357eb33cfe9ddd7ea78414daa9" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.794030 4782 scope.go:117] "RemoveContainer" containerID="4931d235129e882b3cc93ba0073bdd7c24594d8dd23b7fbd141c64984fff9b72" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.833803 4782 scope.go:117] "RemoveContainer" containerID="4a8587e1a291a5cb1726d10d4828a532dcc3ba2c205cb38d7e02606d91a90288" Nov 24 12:23:02 crc kubenswrapper[4782]: I1124 12:23:02.858995 4782 scope.go:117] "RemoveContainer" containerID="5a9615f9b31ca6cf95f7360da35f57ee061a7b011c03ca3c2696f9449e26f13a" Nov 24 12:23:06 crc kubenswrapper[4782]: I1124 12:23:06.491510 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:23:06 crc kubenswrapper[4782]: E1124 12:23:06.492944 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:23:07 crc kubenswrapper[4782]: I1124 12:23:07.027970 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bvx8c"] Nov 24 12:23:07 crc kubenswrapper[4782]: I1124 12:23:07.036353 4782 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-db-sync-bvx8c"] Nov 24 12:23:07 crc kubenswrapper[4782]: I1124 12:23:07.510489 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e69f3b5-7735-44bd-9a5c-aa6060e04858" path="/var/lib/kubelet/pods/4e69f3b5-7735-44bd-9a5c-aa6060e04858/volumes" Nov 24 12:23:19 crc kubenswrapper[4782]: I1124 12:23:19.491803 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:23:19 crc kubenswrapper[4782]: E1124 12:23:19.492500 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:23:34 crc kubenswrapper[4782]: I1124 12:23:34.491799 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:23:34 crc kubenswrapper[4782]: E1124 12:23:34.492673 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:23:49 crc kubenswrapper[4782]: I1124 12:23:49.491251 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:23:49 crc kubenswrapper[4782]: E1124 12:23:49.492723 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:24:00 crc kubenswrapper[4782]: I1124 12:24:00.491250 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:24:00 crc kubenswrapper[4782]: E1124 12:24:00.494480 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:24:03 crc kubenswrapper[4782]: I1124 12:24:03.190046 4782 scope.go:117] "RemoveContainer" containerID="854752b27206e62ce2345f682908d3037b1348c5f9761b4c21b4ac736d626dc2" Nov 24 12:24:03 crc kubenswrapper[4782]: I1124 12:24:03.217999 4782 scope.go:117] "RemoveContainer" containerID="2cfdeb6bfcea7c353056f9158008d0301187252564f1aa239a37f21e37ca75e7" Nov 24 12:24:03 crc kubenswrapper[4782]: I1124 12:24:03.257253 4782 scope.go:117] "RemoveContainer" containerID="a3b3ecde0310e26419505ebce09b4ae6f94762663149b78913c8a0259c96dadf" Nov 24 12:24:10 crc 
kubenswrapper[4782]: I1124 12:24:10.046126 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nftg9"] Nov 24 12:24:10 crc kubenswrapper[4782]: I1124 12:24:10.056727 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nftg9"] Nov 24 12:24:11 crc kubenswrapper[4782]: I1124 12:24:11.500764 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce98ec2-7dab-420c-8f56-e80c874419eb" path="/var/lib/kubelet/pods/cce98ec2-7dab-420c-8f56-e80c874419eb/volumes" Nov 24 12:24:15 crc kubenswrapper[4782]: I1124 12:24:15.492223 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:24:15 crc kubenswrapper[4782]: E1124 12:24:15.493022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:24:16 crc kubenswrapper[4782]: I1124 12:24:16.049701 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tzz9s"] Nov 24 12:24:16 crc kubenswrapper[4782]: I1124 12:24:16.059878 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h9mzc"] Nov 24 12:24:16 crc kubenswrapper[4782]: I1124 12:24:16.071962 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tzz9s"] Nov 24 12:24:16 crc kubenswrapper[4782]: I1124 12:24:16.079552 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h9mzc"] Nov 24 12:24:17 crc kubenswrapper[4782]: I1124 12:24:17.503210 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14d52143-6c70-4d37-9829-c6ce79b2b8ee" path="/var/lib/kubelet/pods/14d52143-6c70-4d37-9829-c6ce79b2b8ee/volumes" Nov 24 12:24:17 crc kubenswrapper[4782]: I1124 12:24:17.504310 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19dc64c0-2cc8-4721-bb12-8723e6e6c6dd" path="/var/lib/kubelet/pods/19dc64c0-2cc8-4721-bb12-8723e6e6c6dd/volumes" Nov 24 12:24:30 crc kubenswrapper[4782]: I1124 12:24:30.491125 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:24:30 crc kubenswrapper[4782]: E1124 12:24:30.491992 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:24:31 crc kubenswrapper[4782]: I1124 12:24:31.033865 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dfw9r"] Nov 24 12:24:31 crc kubenswrapper[4782]: I1124 12:24:31.041618 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dfw9r"] Nov 24 12:24:31 crc kubenswrapper[4782]: I1124 12:24:31.501897 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e814aae-c22b-41ff-bf86-0cbe5a766eab" 
path="/var/lib/kubelet/pods/4e814aae-c22b-41ff-bf86-0cbe5a766eab/volumes" Nov 24 12:24:39 crc kubenswrapper[4782]: I1124 12:24:39.030882 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pv8t7"] Nov 24 12:24:39 crc kubenswrapper[4782]: I1124 12:24:39.037237 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pv8t7"] Nov 24 12:24:39 crc kubenswrapper[4782]: I1124 12:24:39.500939 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73188696-c109-46f8-985b-6f5e9ef5b787" path="/var/lib/kubelet/pods/73188696-c109-46f8-985b-6f5e9ef5b787/volumes" Nov 24 12:24:45 crc kubenswrapper[4782]: I1124 12:24:45.491208 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:24:45 crc kubenswrapper[4782]: E1124 12:24:45.491949 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:24:47 crc kubenswrapper[4782]: I1124 12:24:47.323622 4782 generic.go:334] "Generic (PLEG): container finished" podID="44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7" containerID="a7140f17177437ba3e2fa684c0b908cfa0b05ad11c7d5581a505f04586f57329" exitCode=0 Nov 24 12:24:47 crc kubenswrapper[4782]: I1124 12:24:47.323722 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" event={"ID":"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7","Type":"ContainerDied","Data":"a7140f17177437ba3e2fa684c0b908cfa0b05ad11c7d5581a505f04586f57329"} Nov 24 12:24:48 crc kubenswrapper[4782]: I1124 12:24:48.861688 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:24:48 crc kubenswrapper[4782]: I1124 12:24:48.991632 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-inventory\") pod \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " Nov 24 12:24:48 crc kubenswrapper[4782]: I1124 12:24:48.991852 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-ssh-key\") pod \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " Nov 24 12:24:48 crc kubenswrapper[4782]: I1124 12:24:48.991943 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwx5k\" (UniqueName: \"kubernetes.io/projected/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-kube-api-access-bwx5k\") pod \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\" (UID: \"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7\") " Nov 24 12:24:48 crc kubenswrapper[4782]: I1124 12:24:48.996887 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-kube-api-access-bwx5k" (OuterVolumeSpecName: "kube-api-access-bwx5k") pod "44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7" (UID: "44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7"). 
InnerVolumeSpecName "kube-api-access-bwx5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.020939 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7" (UID: "44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.036828 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-inventory" (OuterVolumeSpecName: "inventory") pod "44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7" (UID: "44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.042980 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-c4k2r"] Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.049407 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-c4k2r"] Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.094519 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwx5k\" (UniqueName: \"kubernetes.io/projected/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-kube-api-access-bwx5k\") on node \"crc\" DevicePath \"\"" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.094566 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.094579 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.345844 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" event={"ID":"44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7","Type":"ContainerDied","Data":"fe9faeda993bcce46554475bc7d722418cb93c377b0ad732fdfb6ecb0decec4e"} Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.345888 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe9faeda993bcce46554475bc7d722418cb93c377b0ad732fdfb6ecb0decec4e" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.345907 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ccf95" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.429169 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt"] Nov 24 12:24:49 crc kubenswrapper[4782]: E1124 12:24:49.429584 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.429603 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.429768 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.430421 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.432587 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.433094 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.434642 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.434848 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.445650 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt"] Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.509925 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e30cc3-dd65-45af-82ed-40354098a697" path="/var/lib/kubelet/pods/42e30cc3-dd65-45af-82ed-40354098a697/volumes" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.601851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.601919 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftnx2\" (UniqueName: \"kubernetes.io/projected/521af29a-8b28-4633-adc5-857ca14e0312-kube-api-access-ftnx2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.602168 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.703318 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.703418 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.703447 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftnx2\" (UniqueName: \"kubernetes.io/projected/521af29a-8b28-4633-adc5-857ca14e0312-kube-api-access-ftnx2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.708022 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.708034 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.721311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftnx2\" (UniqueName: \"kubernetes.io/projected/521af29a-8b28-4633-adc5-857ca14e0312-kube-api-access-ftnx2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:49 crc kubenswrapper[4782]: I1124 12:24:49.783075 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:24:50 crc kubenswrapper[4782]: I1124 12:24:50.305344 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt"] Nov 24 12:24:50 crc kubenswrapper[4782]: W1124 12:24:50.307730 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod521af29a_8b28_4633_adc5_857ca14e0312.slice/crio-4d0c5a001a45bc475e579342006e4ad9274161cc4221b550c600d106c7bc53c5 WatchSource:0}: Error finding container 4d0c5a001a45bc475e579342006e4ad9274161cc4221b550c600d106c7bc53c5: Status 404 returned error can't find the container with id 4d0c5a001a45bc475e579342006e4ad9274161cc4221b550c600d106c7bc53c5 Nov 24 12:24:50 crc kubenswrapper[4782]: I1124 12:24:50.359409 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" event={"ID":"521af29a-8b28-4633-adc5-857ca14e0312","Type":"ContainerStarted","Data":"4d0c5a001a45bc475e579342006e4ad9274161cc4221b550c600d106c7bc53c5"} Nov 24 12:24:51 crc kubenswrapper[4782]: I1124 12:24:51.368772 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" event={"ID":"521af29a-8b28-4633-adc5-857ca14e0312","Type":"ContainerStarted","Data":"1592cee2b86ac8d7d356b0900cf42bf31d1ed649e87fdcc39c2f30efe9886843"} Nov 24 12:24:51 crc kubenswrapper[4782]: I1124 12:24:51.386746 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" podStartSLOduration=1.9610474020000002 podStartE2EDuration="2.386718637s" podCreationTimestamp="2025-11-24 12:24:49 +0000 UTC" firstStartedPulling="2025-11-24 12:24:50.309661134 +0000 UTC m=+1739.553494903" lastFinishedPulling="2025-11-24 12:24:50.735332369 +0000 UTC m=+1739.979166138" observedRunningTime="2025-11-24 12:24:51.384896707 +0000 UTC m=+1740.628730486" watchObservedRunningTime="2025-11-24 12:24:51.386718637 +0000 UTC m=+1740.630552406" Nov 24 12:24:58 crc kubenswrapper[4782]: I1124 12:24:58.490755 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:24:58 crc kubenswrapper[4782]: E1124 12:24:58.492692 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:25:03 crc kubenswrapper[4782]: I1124 12:25:03.330937 4782 scope.go:117] "RemoveContainer" containerID="8edc41d602a47fd694df6ba55cd1e42b6459adf724c2909fc7f6452d04590b73" Nov 24 12:25:03 crc kubenswrapper[4782]: I1124 12:25:03.357342 4782 scope.go:117] "RemoveContainer" containerID="0eb17af01c443abb5d317744ebbeeb45b2c7f9f955228287a39c10c5e37781d9" Nov 24 12:25:03 crc kubenswrapper[4782]: I1124 12:25:03.408785 4782 scope.go:117] "RemoveContainer" containerID="0ea2bc84c459d47ee1feae9ea9546d4f173bb942ae5e3a16e21caf912b056c8f" Nov 24 12:25:03 crc kubenswrapper[4782]: I1124 12:25:03.453286 4782 scope.go:117] "RemoveContainer" 
containerID="5d29ae5715bd38adf7086de32933e95566eb28a5bc75e7fc2621d447f44a67d1" Nov 24 12:25:03 crc kubenswrapper[4782]: I1124 12:25:03.488780 4782 scope.go:117] "RemoveContainer" containerID="5021bb4a51cda786f28f1d047b9f835406e4184b33c5fecacfed73e03ce2c28b" Nov 24 12:25:03 crc kubenswrapper[4782]: I1124 12:25:03.549952 4782 scope.go:117] "RemoveContainer" containerID="6d51d8793fb378702d4a1b4e38bab1e7c00a0bcee1e88b8045d2a0a11797e248" Nov 24 12:25:11 crc kubenswrapper[4782]: I1124 12:25:11.496186 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:25:11 crc kubenswrapper[4782]: E1124 12:25:11.497197 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:25:23 crc kubenswrapper[4782]: I1124 12:25:23.492046 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:25:23 crc kubenswrapper[4782]: E1124 12:25:23.493156 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:25:25 crc kubenswrapper[4782]: I1124 12:25:25.043837 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d520-account-create-brb7r"] Nov 24 12:25:25 crc kubenswrapper[4782]: I1124 12:25:25.054344 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d520-account-create-brb7r"] Nov 24 12:25:25 crc kubenswrapper[4782]: I1124 12:25:25.503047 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55557c05-cdcc-4fff-8ca2-6e616c2a1854" path="/var/lib/kubelet/pods/55557c05-cdcc-4fff-8ca2-6e616c2a1854/volumes" Nov 24 12:25:26 crc kubenswrapper[4782]: I1124 12:25:26.033413 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qs6zz"] Nov 24 12:25:26 crc kubenswrapper[4782]: I1124 12:25:26.046136 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mxg8x"] Nov 24 12:25:26 crc kubenswrapper[4782]: I1124 12:25:26.055846 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qs6zz"] Nov 24 12:25:26 crc kubenswrapper[4782]: I1124 12:25:26.064323 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mxg8x"] Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.032766 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d20a-account-create-cdvbc"] Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.040219 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-gghr8"] Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.047249 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8a90-account-create-b5fwb"] Nov 24 12:25:27 crc 
kubenswrapper[4782]: I1124 12:25:27.053686 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gghr8"] Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.059328 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8a90-account-create-b5fwb"] Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.065118 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d20a-account-create-cdvbc"] Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.505695 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a" path="/var/lib/kubelet/pods/0f3e6cee-e7b1-4f3b-ba6b-69812ee7629a/volumes" Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.506291 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e0a450-529e-4fed-a95f-c7a37b086f3b" path="/var/lib/kubelet/pods/82e0a450-529e-4fed-a95f-c7a37b086f3b/volumes" Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.506843 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f452b668-2d6c-4b6c-b0d7-053a7b908a24" path="/var/lib/kubelet/pods/f452b668-2d6c-4b6c-b0d7-053a7b908a24/volumes" Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.507401 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a6efa0-83b9-4dd4-b55b-d262cb88f536" path="/var/lib/kubelet/pods/f9a6efa0-83b9-4dd4-b55b-d262cb88f536/volumes" Nov 24 12:25:27 crc kubenswrapper[4782]: I1124 12:25:27.508455 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff281fed-4697-4e59-9d08-260214164d8e" path="/var/lib/kubelet/pods/ff281fed-4697-4e59-9d08-260214164d8e/volumes" Nov 24 12:25:37 crc kubenswrapper[4782]: I1124 12:25:37.491093 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:25:37 crc kubenswrapper[4782]: E1124 12:25:37.491925 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:25:49 crc kubenswrapper[4782]: I1124 12:25:49.491204 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:25:49 crc kubenswrapper[4782]: E1124 12:25:49.491880 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:26:00 crc kubenswrapper[4782]: I1124 12:26:00.490599 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:26:01 crc kubenswrapper[4782]: I1124 12:26:01.484225 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" 
event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"e5f5e190d98771d99805cc8ad5110104a3ace9bfc8a7a349a68c23899adc8da6"} Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.509338 4782 generic.go:334] "Generic (PLEG): container finished" podID="521af29a-8b28-4633-adc5-857ca14e0312" containerID="1592cee2b86ac8d7d356b0900cf42bf31d1ed649e87fdcc39c2f30efe9886843" exitCode=0 Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.510583 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" event={"ID":"521af29a-8b28-4633-adc5-857ca14e0312","Type":"ContainerDied","Data":"1592cee2b86ac8d7d356b0900cf42bf31d1ed649e87fdcc39c2f30efe9886843"} Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.683518 4782 scope.go:117] "RemoveContainer" containerID="5c9bfbcde1deaaac144fb1b174965c004833fa8d28297b6f644c32878a823842" Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.716190 4782 scope.go:117] "RemoveContainer" containerID="d7c023fed0f961a554b1c0e012ef372ee467e23ee20cc22f28116ac12eb595ca" Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.749768 4782 scope.go:117] "RemoveContainer" containerID="b5bd8c4e6ca065b1bdfbbf3bb0a7280536730352599aec9313069eca5b97fb1c" Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.792910 4782 scope.go:117] "RemoveContainer" containerID="71594828c8cb799014de681d6c625eb7c9d7221cb107e6ff4b861f15beae6463" Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.833175 4782 scope.go:117] "RemoveContainer" containerID="15a310fadf4c98142fe767e8b6ea2b5050ab4515992fb4c4651d477d42ced434" Nov 24 12:26:03 crc kubenswrapper[4782]: I1124 12:26:03.871698 4782 scope.go:117] "RemoveContainer" containerID="95b3701023e2a219c933fc13bc42f7408e5233e3906599fc8a52dd25260dc333" Nov 24 12:26:04 crc kubenswrapper[4782]: I1124 12:26:04.908857 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:26:04 crc kubenswrapper[4782]: I1124 12:26:04.939216 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-ssh-key\") pod \"521af29a-8b28-4633-adc5-857ca14e0312\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " Nov 24 12:26:04 crc kubenswrapper[4782]: I1124 12:26:04.939317 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftnx2\" (UniqueName: \"kubernetes.io/projected/521af29a-8b28-4633-adc5-857ca14e0312-kube-api-access-ftnx2\") pod \"521af29a-8b28-4633-adc5-857ca14e0312\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " Nov 24 12:26:04 crc kubenswrapper[4782]: I1124 12:26:04.939407 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-inventory\") pod \"521af29a-8b28-4633-adc5-857ca14e0312\" (UID: \"521af29a-8b28-4633-adc5-857ca14e0312\") " Nov 24 12:26:04 crc kubenswrapper[4782]: I1124 12:26:04.945299 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521af29a-8b28-4633-adc5-857ca14e0312-kube-api-access-ftnx2" (OuterVolumeSpecName: "kube-api-access-ftnx2") pod "521af29a-8b28-4633-adc5-857ca14e0312" (UID: "521af29a-8b28-4633-adc5-857ca14e0312"). InnerVolumeSpecName "kube-api-access-ftnx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:26:04 crc kubenswrapper[4782]: I1124 12:26:04.970772 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "521af29a-8b28-4633-adc5-857ca14e0312" (UID: "521af29a-8b28-4633-adc5-857ca14e0312"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:26:04 crc kubenswrapper[4782]: I1124 12:26:04.971038 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-inventory" (OuterVolumeSpecName: "inventory") pod "521af29a-8b28-4633-adc5-857ca14e0312" (UID: "521af29a-8b28-4633-adc5-857ca14e0312"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.041792 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftnx2\" (UniqueName: \"kubernetes.io/projected/521af29a-8b28-4633-adc5-857ca14e0312-kube-api-access-ftnx2\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.041821 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.041830 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/521af29a-8b28-4633-adc5-857ca14e0312-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.530515 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" event={"ID":"521af29a-8b28-4633-adc5-857ca14e0312","Type":"ContainerDied","Data":"4d0c5a001a45bc475e579342006e4ad9274161cc4221b550c600d106c7bc53c5"} Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.530560 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.530563 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d0c5a001a45bc475e579342006e4ad9274161cc4221b550c600d106c7bc53c5" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.637759 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp"] Nov 24 12:26:05 crc kubenswrapper[4782]: E1124 12:26:05.638626 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521af29a-8b28-4633-adc5-857ca14e0312" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.638644 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="521af29a-8b28-4633-adc5-857ca14e0312" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.638829 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="521af29a-8b28-4633-adc5-857ca14e0312" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.639457 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.642206 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.642533 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.642695 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.642774 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.661366 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp"] Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.686812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvhr\" (UniqueName: \"kubernetes.io/projected/c153f0a7-9375-40ea-9d60-aad9c960a30a-kube-api-access-6xvhr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.686905 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.686961 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.788240 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.788532 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xvhr\" (UniqueName: \"kubernetes.io/projected/c153f0a7-9375-40ea-9d60-aad9c960a30a-kube-api-access-6xvhr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.788605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.793669 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.807492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.808850 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvhr\" (UniqueName: \"kubernetes.io/projected/c153f0a7-9375-40ea-9d60-aad9c960a30a-kube-api-access-6xvhr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:05 crc kubenswrapper[4782]: I1124 12:26:05.956858 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:06 crc kubenswrapper[4782]: I1124 12:26:06.496275 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp"] Nov 24 12:26:06 crc kubenswrapper[4782]: I1124 12:26:06.541038 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" event={"ID":"c153f0a7-9375-40ea-9d60-aad9c960a30a","Type":"ContainerStarted","Data":"8f7534e38a6752d5aacea89e09be7cd04e8954f7aebbc1ced8a2feb0ac3174f3"} Nov 24 12:26:07 crc kubenswrapper[4782]: I1124 12:26:07.550028 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" event={"ID":"c153f0a7-9375-40ea-9d60-aad9c960a30a","Type":"ContainerStarted","Data":"a8c0167f473231e50aa5e0dfacf4f3d2110522a7564b381c267e643016f0a15a"} Nov 24 12:26:07 crc kubenswrapper[4782]: I1124 12:26:07.573709 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" podStartSLOduration=2.022708362 podStartE2EDuration="2.573690832s" podCreationTimestamp="2025-11-24 12:26:05 +0000 UTC" firstStartedPulling="2025-11-24 12:26:06.503996859 +0000 UTC m=+1815.747830638" lastFinishedPulling="2025-11-24 12:26:07.054979339 +0000 UTC m=+1816.298813108" observedRunningTime="2025-11-24 12:26:07.566059475 +0000 UTC m=+1816.809893244" watchObservedRunningTime="2025-11-24 12:26:07.573690832 +0000 UTC m=+1816.817524601" Nov 24 12:26:12 crc kubenswrapper[4782]: I1124 12:26:12.596415 4782 generic.go:334] "Generic (PLEG): container finished" podID="c153f0a7-9375-40ea-9d60-aad9c960a30a" containerID="a8c0167f473231e50aa5e0dfacf4f3d2110522a7564b381c267e643016f0a15a" exitCode=0 Nov 24 12:26:12 crc kubenswrapper[4782]: I1124 
12:26:12.596503 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" event={"ID":"c153f0a7-9375-40ea-9d60-aad9c960a30a","Type":"ContainerDied","Data":"a8c0167f473231e50aa5e0dfacf4f3d2110522a7564b381c267e643016f0a15a"} Nov 24 12:26:13 crc kubenswrapper[4782]: I1124 12:26:13.974172 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.151663 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-ssh-key\") pod \"c153f0a7-9375-40ea-9d60-aad9c960a30a\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.151932 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-inventory\") pod \"c153f0a7-9375-40ea-9d60-aad9c960a30a\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.152139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xvhr\" (UniqueName: \"kubernetes.io/projected/c153f0a7-9375-40ea-9d60-aad9c960a30a-kube-api-access-6xvhr\") pod \"c153f0a7-9375-40ea-9d60-aad9c960a30a\" (UID: \"c153f0a7-9375-40ea-9d60-aad9c960a30a\") " Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.164396 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c153f0a7-9375-40ea-9d60-aad9c960a30a-kube-api-access-6xvhr" (OuterVolumeSpecName: "kube-api-access-6xvhr") pod "c153f0a7-9375-40ea-9d60-aad9c960a30a" (UID: "c153f0a7-9375-40ea-9d60-aad9c960a30a"). InnerVolumeSpecName "kube-api-access-6xvhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.179465 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c153f0a7-9375-40ea-9d60-aad9c960a30a" (UID: "c153f0a7-9375-40ea-9d60-aad9c960a30a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.189562 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-inventory" (OuterVolumeSpecName: "inventory") pod "c153f0a7-9375-40ea-9d60-aad9c960a30a" (UID: "c153f0a7-9375-40ea-9d60-aad9c960a30a"). InnerVolumeSpecName "inventory". 
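PluginName "kubernetes.io/secret", VolumeGidValue ""

[Annotation] The unmount sequence around this point — UnmountVolume started, TearDown succeeded, then "Volume detached" — is the kubelet volume manager reconciling the node's actual state against the desired state once a pod is finished; the configure-network pod went through the identical sequence at 12:26:04-12:26:05. A rough Go sketch of that reconcile pattern, with hypothetical names; the real reconciler (reconciler_common.go in the log) is considerably more involved:

package main

import "fmt"

// Hypothetical sketch of the reconcile loop visible in the log: any
// volume still mounted for a pod that is no longer desired gets
// unmounted, then reported as detached on the node.
func main() {
	desired := map[string]bool{} // pod deleted: no volumes desired
	actual := []string{"ssh-key", "inventory", "kube-api-access-6xvhr"}
	for _, vol := range actual {
		if desired[vol] {
			continue // still wanted; leave it mounted
		}
		fmt.Printf("UnmountVolume started for volume %q\n", vol)
		fmt.Printf("Volume detached for volume %q on node \"crc\"\n", vol)
	}
}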
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.254089 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.254131 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c153f0a7-9375-40ea-9d60-aad9c960a30a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.254148 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xvhr\" (UniqueName: \"kubernetes.io/projected/c153f0a7-9375-40ea-9d60-aad9c960a30a-kube-api-access-6xvhr\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.432343 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb"] Nov 24 12:26:14 crc kubenswrapper[4782]: E1124 12:26:14.432788 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c153f0a7-9375-40ea-9d60-aad9c960a30a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.432811 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c153f0a7-9375-40ea-9d60-aad9c960a30a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.433044 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c153f0a7-9375-40ea-9d60-aad9c960a30a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.433788 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.452512 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb"] Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.457549 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.457589 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq7rx\" (UniqueName: \"kubernetes.io/projected/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-kube-api-access-vq7rx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.457676 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.562588 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.563034 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.563097 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq7rx\" (UniqueName: \"kubernetes.io/projected/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-kube-api-access-vq7rx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.566833 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.567329 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.607200 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq7rx\" (UniqueName: \"kubernetes.io/projected/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-kube-api-access-vq7rx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w7gqb\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.622269 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" event={"ID":"c153f0a7-9375-40ea-9d60-aad9c960a30a","Type":"ContainerDied","Data":"8f7534e38a6752d5aacea89e09be7cd04e8954f7aebbc1ced8a2feb0ac3174f3"} Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.622312 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7534e38a6752d5aacea89e09be7cd04e8954f7aebbc1ced8a2feb0ac3174f3" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.622339 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp" Nov 24 12:26:14 crc kubenswrapper[4782]: I1124 12:26:14.752191 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:26:15 crc kubenswrapper[4782]: I1124 12:26:15.280465 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb"] Nov 24 12:26:15 crc kubenswrapper[4782]: I1124 12:26:15.634938 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" event={"ID":"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913","Type":"ContainerStarted","Data":"a2224565778bf9f23f52fb837b8ca32808db2e4fb1241ed59605f5c55adc390c"} Nov 24 12:26:16 crc kubenswrapper[4782]: I1124 12:26:16.645635 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" event={"ID":"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913","Type":"ContainerStarted","Data":"116df7e0be53640edbf63f563f5ee42125797c1b0adc02aa5a42f65cc1278f72"} Nov 24 12:26:16 crc kubenswrapper[4782]: I1124 12:26:16.663004 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" podStartSLOduration=2.233198029 podStartE2EDuration="2.662988086s" podCreationTimestamp="2025-11-24 12:26:14 +0000 UTC" firstStartedPulling="2025-11-24 12:26:15.297014692 +0000 UTC m=+1824.540848461" lastFinishedPulling="2025-11-24 12:26:15.726804759 +0000 UTC m=+1824.970638518" observedRunningTime="2025-11-24 12:26:16.661116815 +0000 UTC m=+1825.904950594" watchObservedRunningTime="2025-11-24 12:26:16.662988086 +0000 UTC m=+1825.906821855" Nov 24 12:26:25 crc kubenswrapper[4782]: I1124 12:26:25.052650 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h75bq"] Nov 24 12:26:25 crc kubenswrapper[4782]: I1124 12:26:25.056468 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h75bq"] Nov 24 
Nov 24 12:26:53 crc kubenswrapper[4782]: I1124 12:26:53.047824 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hcqdz"] Nov 24 12:26:53 crc kubenswrapper[4782]: I1124 12:26:53.054640 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hcqdz"] Nov 24 12:26:53 crc kubenswrapper[4782]: I1124 12:26:53.503319 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73fb7825-28ae-412d-b01b-98cb9f74c06e" path="/var/lib/kubelet/pods/73fb7825-28ae-412d-b01b-98cb9f74c06e/volumes" Nov 24 12:26:55 crc kubenswrapper[4782]: I1124 12:26:55.045498 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bbh5l"] Nov 24 12:26:55 crc kubenswrapper[4782]: I1124 12:26:55.060141 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bbh5l"] Nov 24 12:26:55 crc kubenswrapper[4782]: I1124 12:26:55.501905 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a12501-1d34-4a24-b2ad-4c932e61a478" path="/var/lib/kubelet/pods/84a12501-1d34-4a24-b2ad-4c932e61a478/volumes" Nov 24 12:26:59 crc kubenswrapper[4782]: I1124 12:26:59.029347 4782 generic.go:334] "Generic (PLEG): container finished" podID="ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913" containerID="116df7e0be53640edbf63f563f5ee42125797c1b0adc02aa5a42f65cc1278f72" exitCode=0 Nov 24 12:26:59 crc kubenswrapper[4782]: I1124 12:26:59.029552 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" event={"ID":"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913","Type":"ContainerDied","Data":"116df7e0be53640edbf63f563f5ee42125797c1b0adc02aa5a42f65cc1278f72"} Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.426028 4782 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.611060 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-ssh-key\") pod \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.611184 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq7rx\" (UniqueName: \"kubernetes.io/projected/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-kube-api-access-vq7rx\") pod \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.611213 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-inventory\") pod \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\" (UID: \"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913\") " Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.622536 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-kube-api-access-vq7rx" (OuterVolumeSpecName: "kube-api-access-vq7rx") pod "ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913" (UID: "ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913"). InnerVolumeSpecName "kube-api-access-vq7rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.635985 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-inventory" (OuterVolumeSpecName: "inventory") pod "ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913" (UID: "ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.637671 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913" (UID: "ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.714059 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq7rx\" (UniqueName: \"kubernetes.io/projected/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-kube-api-access-vq7rx\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.714107 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:00 crc kubenswrapper[4782]: I1124 12:27:00.714119 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.056687 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" event={"ID":"ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913","Type":"ContainerDied","Data":"a2224565778bf9f23f52fb837b8ca32808db2e4fb1241ed59605f5c55adc390c"} Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.056997 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2224565778bf9f23f52fb837b8ca32808db2e4fb1241ed59605f5c55adc390c" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.056819 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w7gqb" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.139297 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln"] Nov 24 12:27:01 crc kubenswrapper[4782]: E1124 12:27:01.140005 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.140026 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.140248 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.141015 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.142836 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.143445 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.143616 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.146151 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.157163 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln"] Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.225177 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6b5t\" (UniqueName: \"kubernetes.io/projected/58220605-30a9-4d4f-b785-3e9edabcfb5c-kube-api-access-n6b5t\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.225238 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.225331 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.326713 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6b5t\" (UniqueName: \"kubernetes.io/projected/58220605-30a9-4d4f-b785-3e9edabcfb5c-kube-api-access-n6b5t\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.326766 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.326879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" 
(UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.330347 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.331207 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.349277 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6b5t\" (UniqueName: \"kubernetes.io/projected/58220605-30a9-4d4f-b785-3e9edabcfb5c-kube-api-access-n6b5t\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s4tln\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.455761 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:27:01 crc kubenswrapper[4782]: I1124 12:27:01.968485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln"] Nov 24 12:27:02 crc kubenswrapper[4782]: I1124 12:27:02.067577 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" event={"ID":"58220605-30a9-4d4f-b785-3e9edabcfb5c","Type":"ContainerStarted","Data":"4ae29d00af033320ec39253a71df2d478d4641a3f7ed90842ab9e2a3748a1dbd"} Nov 24 12:27:03 crc kubenswrapper[4782]: I1124 12:27:03.077480 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" event={"ID":"58220605-30a9-4d4f-b785-3e9edabcfb5c","Type":"ContainerStarted","Data":"77a34da048d4aeeca5e69165c6038f22cd48f348d8f8174e3e890cf8bdc82c37"} Nov 24 12:27:03 crc kubenswrapper[4782]: I1124 12:27:03.097553 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" podStartSLOduration=1.6639298660000001 podStartE2EDuration="2.097534107s" podCreationTimestamp="2025-11-24 12:27:01 +0000 UTC" firstStartedPulling="2025-11-24 12:27:01.974490914 +0000 UTC m=+1871.218324683" lastFinishedPulling="2025-11-24 12:27:02.408095155 +0000 UTC m=+1871.651928924" observedRunningTime="2025-11-24 12:27:03.092154651 +0000 UTC m=+1872.335988430" watchObservedRunningTime="2025-11-24 12:27:03.097534107 +0000 UTC m=+1872.341367876" Nov 24 12:27:04 crc kubenswrapper[4782]: I1124 12:27:04.028097 4782 scope.go:117] "RemoveContainer" containerID="c0937b103899b787db16e5c2944bf5ff8af9b48743c0588b359595a4eff78792" Nov 24 12:27:04 crc kubenswrapper[4782]: I1124 12:27:04.077993 4782 scope.go:117] "RemoveContainer" containerID="0173d54fc95aaabb0d8b6fa2bd4e1c12fab4c5a3f66cbea010d1369c3b5edc6f" Nov 24 12:27:04 crc kubenswrapper[4782]: I1124 
12:27:04.117366 4782 scope.go:117] "RemoveContainer" containerID="549e8446cc35343fbfa718f9059dcb85e44b0a4cdfa044b27a2a486ca1a6c63a" Nov 24 12:27:37 crc kubenswrapper[4782]: I1124 12:27:37.051192 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9x4kt"] Nov 24 12:27:37 crc kubenswrapper[4782]: I1124 12:27:37.058949 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-9x4kt"] Nov 24 12:27:37 crc kubenswrapper[4782]: I1124 12:27:37.502612 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803de2c6-2262-4959-8c9f-afc4a2da0196" path="/var/lib/kubelet/pods/803de2c6-2262-4959-8c9f-afc4a2da0196/volumes" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.539342 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l24fb"] Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.550301 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l24fb"] Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.550438 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.702323 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chln7\" (UniqueName: \"kubernetes.io/projected/598d5a70-7591-41d3-942e-d1de738403ad-kube-api-access-chln7\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.702680 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-utilities\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.702715 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-catalog-content\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.725113 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62hzm"] Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.731495 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.735179 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62hzm"] Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.805157 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chln7\" (UniqueName: \"kubernetes.io/projected/598d5a70-7591-41d3-942e-d1de738403ad-kube-api-access-chln7\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.805257 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-utilities\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.805288 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-catalog-content\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.805859 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-catalog-content\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.806442 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-utilities\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.833601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chln7\" (UniqueName: \"kubernetes.io/projected/598d5a70-7591-41d3-942e-d1de738403ad-kube-api-access-chln7\") pod \"redhat-marketplace-l24fb\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") " pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.874643 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.907507 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrlb\" (UniqueName: \"kubernetes.io/projected/195bee6c-70d2-45ee-94da-8b5873d3cea0-kube-api-access-ttrlb\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.907654 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-catalog-content\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:42 crc kubenswrapper[4782]: I1124 12:27:42.907679 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-utilities\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.009544 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttrlb\" (UniqueName: \"kubernetes.io/projected/195bee6c-70d2-45ee-94da-8b5873d3cea0-kube-api-access-ttrlb\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.010037 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-catalog-content\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.010064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-utilities\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.010888 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-utilities\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.010888 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-catalog-content\") pod \"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.029595 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttrlb\" (UniqueName: \"kubernetes.io/projected/195bee6c-70d2-45ee-94da-8b5873d3cea0-kube-api-access-ttrlb\") pod 
\"certified-operators-62hzm\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.049995 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.358110 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l24fb"] Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.392446 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62hzm"] Nov 24 12:27:43 crc kubenswrapper[4782]: W1124 12:27:43.398229 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod195bee6c_70d2_45ee_94da_8b5873d3cea0.slice/crio-577e79cfecc8316d38a6dfa27fcee41e9c5e0411f62ffc5e9405c680588b3bb0 WatchSource:0}: Error finding container 577e79cfecc8316d38a6dfa27fcee41e9c5e0411f62ffc5e9405c680588b3bb0: Status 404 returned error can't find the container with id 577e79cfecc8316d38a6dfa27fcee41e9c5e0411f62ffc5e9405c680588b3bb0 Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.419493 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l24fb" event={"ID":"598d5a70-7591-41d3-942e-d1de738403ad","Type":"ContainerStarted","Data":"82465bf71f3e84050fd8b06de1db127060bc3effb97a1cd92b8f38d0e15ca5f8"} Nov 24 12:27:43 crc kubenswrapper[4782]: I1124 12:27:43.420972 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62hzm" event={"ID":"195bee6c-70d2-45ee-94da-8b5873d3cea0","Type":"ContainerStarted","Data":"577e79cfecc8316d38a6dfa27fcee41e9c5e0411f62ffc5e9405c680588b3bb0"} Nov 24 12:27:44 crc kubenswrapper[4782]: I1124 12:27:44.429986 4782 generic.go:334] "Generic (PLEG): container finished" podID="598d5a70-7591-41d3-942e-d1de738403ad" containerID="913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c" exitCode=0 Nov 24 12:27:44 crc kubenswrapper[4782]: I1124 12:27:44.430047 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l24fb" event={"ID":"598d5a70-7591-41d3-942e-d1de738403ad","Type":"ContainerDied","Data":"913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c"} Nov 24 12:27:44 crc kubenswrapper[4782]: I1124 12:27:44.434577 4782 generic.go:334] "Generic (PLEG): container finished" podID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerID="741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713" exitCode=0 Nov 24 12:27:44 crc kubenswrapper[4782]: I1124 12:27:44.434676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62hzm" event={"ID":"195bee6c-70d2-45ee-94da-8b5873d3cea0","Type":"ContainerDied","Data":"741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713"} Nov 24 12:27:45 crc kubenswrapper[4782]: I1124 12:27:45.445026 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l24fb" event={"ID":"598d5a70-7591-41d3-942e-d1de738403ad","Type":"ContainerStarted","Data":"0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b"} Nov 24 12:27:45 crc kubenswrapper[4782]: I1124 12:27:45.448229 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62hzm" 
event={"ID":"195bee6c-70d2-45ee-94da-8b5873d3cea0","Type":"ContainerStarted","Data":"6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d"} Nov 24 12:27:46 crc kubenswrapper[4782]: I1124 12:27:46.458401 4782 generic.go:334] "Generic (PLEG): container finished" podID="598d5a70-7591-41d3-942e-d1de738403ad" containerID="0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b" exitCode=0 Nov 24 12:27:46 crc kubenswrapper[4782]: I1124 12:27:46.458476 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l24fb" event={"ID":"598d5a70-7591-41d3-942e-d1de738403ad","Type":"ContainerDied","Data":"0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b"} Nov 24 12:27:47 crc kubenswrapper[4782]: I1124 12:27:47.468510 4782 generic.go:334] "Generic (PLEG): container finished" podID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerID="6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d" exitCode=0 Nov 24 12:27:47 crc kubenswrapper[4782]: I1124 12:27:47.468544 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62hzm" event={"ID":"195bee6c-70d2-45ee-94da-8b5873d3cea0","Type":"ContainerDied","Data":"6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d"} Nov 24 12:27:47 crc kubenswrapper[4782]: I1124 12:27:47.471022 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l24fb" event={"ID":"598d5a70-7591-41d3-942e-d1de738403ad","Type":"ContainerStarted","Data":"e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928"} Nov 24 12:27:47 crc kubenswrapper[4782]: I1124 12:27:47.516609 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l24fb" podStartSLOduration=3.040219037 podStartE2EDuration="5.516583698s" podCreationTimestamp="2025-11-24 12:27:42 +0000 UTC" firstStartedPulling="2025-11-24 12:27:44.432266769 +0000 UTC m=+1913.676100538" lastFinishedPulling="2025-11-24 12:27:46.90863143 +0000 UTC m=+1916.152465199" observedRunningTime="2025-11-24 12:27:47.512018944 +0000 UTC m=+1916.755852713" watchObservedRunningTime="2025-11-24 12:27:47.516583698 +0000 UTC m=+1916.760417467" Nov 24 12:27:48 crc kubenswrapper[4782]: I1124 12:27:48.483874 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62hzm" event={"ID":"195bee6c-70d2-45ee-94da-8b5873d3cea0","Type":"ContainerStarted","Data":"820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499"} Nov 24 12:27:48 crc kubenswrapper[4782]: I1124 12:27:48.504417 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62hzm" podStartSLOduration=2.846693279 podStartE2EDuration="6.504390027s" podCreationTimestamp="2025-11-24 12:27:42 +0000 UTC" firstStartedPulling="2025-11-24 12:27:44.436182525 +0000 UTC m=+1913.680016284" lastFinishedPulling="2025-11-24 12:27:48.093879263 +0000 UTC m=+1917.337713032" observedRunningTime="2025-11-24 12:27:48.498958399 +0000 UTC m=+1917.742792168" watchObservedRunningTime="2025-11-24 12:27:48.504390027 +0000 UTC m=+1917.748223806" Nov 24 12:27:52 crc kubenswrapper[4782]: I1124 12:27:52.875534 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:52 crc kubenswrapper[4782]: I1124 12:27:52.876193 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:52 crc kubenswrapper[4782]: I1124 12:27:52.924233 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:53 crc kubenswrapper[4782]: I1124 12:27:53.050363 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:53 crc kubenswrapper[4782]: I1124 12:27:53.051232 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:53 crc kubenswrapper[4782]: I1124 12:27:53.094826 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:53 crc kubenswrapper[4782]: I1124 12:27:53.586816 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:53 crc kubenswrapper[4782]: I1124 12:27:53.594132 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l24fb" Nov 24 12:27:54 crc kubenswrapper[4782]: I1124 12:27:54.925250 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62hzm"] Nov 24 12:27:55 crc kubenswrapper[4782]: I1124 12:27:55.551027 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62hzm" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="registry-server" containerID="cri-o://820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499" gracePeriod=2 Nov 24 12:27:55 crc kubenswrapper[4782]: I1124 12:27:55.916208 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l24fb"] Nov 24 12:27:55 crc kubenswrapper[4782]: I1124 12:27:55.916991 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l24fb" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="registry-server" containerID="cri-o://e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928" gracePeriod=2 Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.128834 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.166766 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttrlb\" (UniqueName: \"kubernetes.io/projected/195bee6c-70d2-45ee-94da-8b5873d3cea0-kube-api-access-ttrlb\") pod \"195bee6c-70d2-45ee-94da-8b5873d3cea0\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.167104 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-catalog-content\") pod \"195bee6c-70d2-45ee-94da-8b5873d3cea0\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.167308 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-utilities\") pod \"195bee6c-70d2-45ee-94da-8b5873d3cea0\" (UID: \"195bee6c-70d2-45ee-94da-8b5873d3cea0\") " Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.167837 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-utilities" (OuterVolumeSpecName: "utilities") pod "195bee6c-70d2-45ee-94da-8b5873d3cea0" (UID: "195bee6c-70d2-45ee-94da-8b5873d3cea0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.168129 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.190341 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195bee6c-70d2-45ee-94da-8b5873d3cea0-kube-api-access-ttrlb" (OuterVolumeSpecName: "kube-api-access-ttrlb") pod "195bee6c-70d2-45ee-94da-8b5873d3cea0" (UID: "195bee6c-70d2-45ee-94da-8b5873d3cea0"). InnerVolumeSpecName "kube-api-access-ttrlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.243018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "195bee6c-70d2-45ee-94da-8b5873d3cea0" (UID: "195bee6c-70d2-45ee-94da-8b5873d3cea0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.272124 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/195bee6c-70d2-45ee-94da-8b5873d3cea0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.272162 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttrlb\" (UniqueName: \"kubernetes.io/projected/195bee6c-70d2-45ee-94da-8b5873d3cea0-kube-api-access-ttrlb\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.276919 4782 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.373643 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-utilities\") pod \"598d5a70-7591-41d3-942e-d1de738403ad\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") "
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.373829 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-catalog-content\") pod \"598d5a70-7591-41d3-942e-d1de738403ad\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") "
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.373953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chln7\" (UniqueName: \"kubernetes.io/projected/598d5a70-7591-41d3-942e-d1de738403ad-kube-api-access-chln7\") pod \"598d5a70-7591-41d3-942e-d1de738403ad\" (UID: \"598d5a70-7591-41d3-942e-d1de738403ad\") "
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.374154 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-utilities" (OuterVolumeSpecName: "utilities") pod "598d5a70-7591-41d3-942e-d1de738403ad" (UID: "598d5a70-7591-41d3-942e-d1de738403ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.375170 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.379105 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598d5a70-7591-41d3-942e-d1de738403ad-kube-api-access-chln7" (OuterVolumeSpecName: "kube-api-access-chln7") pod "598d5a70-7591-41d3-942e-d1de738403ad" (UID: "598d5a70-7591-41d3-942e-d1de738403ad"). InnerVolumeSpecName "kube-api-access-chln7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.392246 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "598d5a70-7591-41d3-942e-d1de738403ad" (UID: "598d5a70-7591-41d3-942e-d1de738403ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.477196 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598d5a70-7591-41d3-942e-d1de738403ad-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.477501 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chln7\" (UniqueName: \"kubernetes.io/projected/598d5a70-7591-41d3-942e-d1de738403ad-kube-api-access-chln7\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.564704 4782 generic.go:334] "Generic (PLEG): container finished" podID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerID="820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499" exitCode=0 Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.564754 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62hzm" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.564769 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62hzm" event={"ID":"195bee6c-70d2-45ee-94da-8b5873d3cea0","Type":"ContainerDied","Data":"820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499"} Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.564807 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62hzm" event={"ID":"195bee6c-70d2-45ee-94da-8b5873d3cea0","Type":"ContainerDied","Data":"577e79cfecc8316d38a6dfa27fcee41e9c5e0411f62ffc5e9405c680588b3bb0"} Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.564826 4782 scope.go:117] "RemoveContainer" containerID="820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.570449 4782 generic.go:334] "Generic (PLEG): container finished" podID="598d5a70-7591-41d3-942e-d1de738403ad" containerID="e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928" exitCode=0 Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.570538 4782 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.570619 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l24fb" event={"ID":"598d5a70-7591-41d3-942e-d1de738403ad","Type":"ContainerDied","Data":"e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928"}
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.570726 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l24fb" event={"ID":"598d5a70-7591-41d3-942e-d1de738403ad","Type":"ContainerDied","Data":"82465bf71f3e84050fd8b06de1db127060bc3effb97a1cd92b8f38d0e15ca5f8"}
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.594703 4782 scope.go:117] "RemoveContainer" containerID="6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d"
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.608923 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62hzm"]
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.617411 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62hzm"]
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.631351 4782 scope.go:117] "RemoveContainer" containerID="741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713"
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.631512 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l24fb"]
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.642717 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l24fb"]
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.656136 4782 scope.go:117] "RemoveContainer" containerID="820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499"
Nov 24 12:27:56 crc kubenswrapper[4782]: E1124 12:27:56.656574 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499\": container with ID starting with 820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499 not found: ID does not exist" containerID="820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499"
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.656610 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499"} err="failed to get container status \"820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499\": rpc error: code = NotFound desc = could not find container \"820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499\": container with ID starting with 820fdd24a03929e02f8219a868c78bd51a9652a3fc8dd07e71da6b30dfc79499 not found: ID does not exist"
Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.656638 4782 scope.go:117] "RemoveContainer" containerID="6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d"
Nov 24 12:27:56 crc kubenswrapper[4782]: E1124 12:27:56.656989 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d\": container with ID starting with 6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d not found: ID does not exist" containerID="6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d"
does not exist" containerID="6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.657013 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d"} err="failed to get container status \"6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d\": rpc error: code = NotFound desc = could not find container \"6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d\": container with ID starting with 6ef8274f6dbfd671f67180e95faa4190504ffa55d4e6f1d0e76a641c35398a7d not found: ID does not exist" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.657037 4782 scope.go:117] "RemoveContainer" containerID="741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713" Nov 24 12:27:56 crc kubenswrapper[4782]: E1124 12:27:56.657608 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713\": container with ID starting with 741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713 not found: ID does not exist" containerID="741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.657646 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713"} err="failed to get container status \"741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713\": rpc error: code = NotFound desc = could not find container \"741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713\": container with ID starting with 741aa2a95498282bf006d74dd798323a799d9d4e87e93375d164c0a9d9d08713 not found: ID does not exist" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.657672 4782 scope.go:117] "RemoveContainer" containerID="e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.717212 4782 scope.go:117] "RemoveContainer" containerID="0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.735672 4782 scope.go:117] "RemoveContainer" containerID="913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.806798 4782 scope.go:117] "RemoveContainer" containerID="e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928" Nov 24 12:27:56 crc kubenswrapper[4782]: E1124 12:27:56.807319 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928\": container with ID starting with e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928 not found: ID does not exist" containerID="e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.807349 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928"} err="failed to get container status \"e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928\": rpc error: code = NotFound desc = could not find container 
\"e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928\": container with ID starting with e961b911318c4ce0a82b85189526175c3a123859b1852793c68c3bdb2af68928 not found: ID does not exist" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.807384 4782 scope.go:117] "RemoveContainer" containerID="0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b" Nov 24 12:27:56 crc kubenswrapper[4782]: E1124 12:27:56.807711 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b\": container with ID starting with 0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b not found: ID does not exist" containerID="0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.807748 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b"} err="failed to get container status \"0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b\": rpc error: code = NotFound desc = could not find container \"0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b\": container with ID starting with 0927d36a9575ba59b782d9f94645d0a05752212497313f05aadcd3f9dad0670b not found: ID does not exist" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.807762 4782 scope.go:117] "RemoveContainer" containerID="913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c" Nov 24 12:27:56 crc kubenswrapper[4782]: E1124 12:27:56.808109 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c\": container with ID starting with 913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c not found: ID does not exist" containerID="913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c" Nov 24 12:27:56 crc kubenswrapper[4782]: I1124 12:27:56.808129 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c"} err="failed to get container status \"913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c\": rpc error: code = NotFound desc = could not find container \"913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c\": container with ID starting with 913aa408ad19ab360ee1e83f3d63b7e2a0d4a3baa404f2a281779fd699c47d7c not found: ID does not exist" Nov 24 12:27:57 crc kubenswrapper[4782]: I1124 12:27:57.516720 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" path="/var/lib/kubelet/pods/195bee6c-70d2-45ee-94da-8b5873d3cea0/volumes" Nov 24 12:27:57 crc kubenswrapper[4782]: I1124 12:27:57.517509 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598d5a70-7591-41d3-942e-d1de738403ad" path="/var/lib/kubelet/pods/598d5a70-7591-41d3-942e-d1de738403ad/volumes" Nov 24 12:28:00 crc kubenswrapper[4782]: I1124 12:28:00.410668 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:28:00 crc 
Nov 24 12:28:00 crc kubenswrapper[4782]: I1124 12:28:00.411605 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:28:00 crc kubenswrapper[4782]: I1124 12:28:00.613002 4782 generic.go:334] "Generic (PLEG): container finished" podID="58220605-30a9-4d4f-b785-3e9edabcfb5c" containerID="77a34da048d4aeeca5e69165c6038f22cd48f348d8f8174e3e890cf8bdc82c37" exitCode=0
Nov 24 12:28:00 crc kubenswrapper[4782]: I1124 12:28:00.613085 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" event={"ID":"58220605-30a9-4d4f-b785-3e9edabcfb5c","Type":"ContainerDied","Data":"77a34da048d4aeeca5e69165c6038f22cd48f348d8f8174e3e890cf8bdc82c37"}
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.091647 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.179421 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-ssh-key\") pod \"58220605-30a9-4d4f-b785-3e9edabcfb5c\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") "
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.179655 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-inventory\") pod \"58220605-30a9-4d4f-b785-3e9edabcfb5c\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") "
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.179735 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6b5t\" (UniqueName: \"kubernetes.io/projected/58220605-30a9-4d4f-b785-3e9edabcfb5c-kube-api-access-n6b5t\") pod \"58220605-30a9-4d4f-b785-3e9edabcfb5c\" (UID: \"58220605-30a9-4d4f-b785-3e9edabcfb5c\") "
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.185582 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58220605-30a9-4d4f-b785-3e9edabcfb5c-kube-api-access-n6b5t" (OuterVolumeSpecName: "kube-api-access-n6b5t") pod "58220605-30a9-4d4f-b785-3e9edabcfb5c" (UID: "58220605-30a9-4d4f-b785-3e9edabcfb5c"). InnerVolumeSpecName "kube-api-access-n6b5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.203591 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-inventory" (OuterVolumeSpecName: "inventory") pod "58220605-30a9-4d4f-b785-3e9edabcfb5c" (UID: "58220605-30a9-4d4f-b785-3e9edabcfb5c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.210338 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "58220605-30a9-4d4f-b785-3e9edabcfb5c" (UID: "58220605-30a9-4d4f-b785-3e9edabcfb5c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.282596 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.282784 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6b5t\" (UniqueName: \"kubernetes.io/projected/58220605-30a9-4d4f-b785-3e9edabcfb5c-kube-api-access-n6b5t\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.282840 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58220605-30a9-4d4f-b785-3e9edabcfb5c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.629295 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" event={"ID":"58220605-30a9-4d4f-b785-3e9edabcfb5c","Type":"ContainerDied","Data":"4ae29d00af033320ec39253a71df2d478d4641a3f7ed90842ab9e2a3748a1dbd"} Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.629833 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae29d00af033320ec39253a71df2d478d4641a3f7ed90842ab9e2a3748a1dbd" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.629425 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s4tln" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.728334 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6bsqg"] Nov 24 12:28:02 crc kubenswrapper[4782]: E1124 12:28:02.728954 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="extract-content" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.728978 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="extract-content" Nov 24 12:28:02 crc kubenswrapper[4782]: E1124 12:28:02.728995 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="extract-content" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729003 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="extract-content" Nov 24 12:28:02 crc kubenswrapper[4782]: E1124 12:28:02.729020 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58220605-30a9-4d4f-b785-3e9edabcfb5c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729030 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="58220605-30a9-4d4f-b785-3e9edabcfb5c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:02 crc kubenswrapper[4782]: E1124 12:28:02.729046 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="registry-server" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729055 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="registry-server" Nov 24 12:28:02 crc kubenswrapper[4782]: E1124 12:28:02.729087 4782 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="extract-utilities" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729095 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="extract-utilities" Nov 24 12:28:02 crc kubenswrapper[4782]: E1124 12:28:02.729107 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="extract-utilities" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729114 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="extract-utilities" Nov 24 12:28:02 crc kubenswrapper[4782]: E1124 12:28:02.729133 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="registry-server" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729140 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="registry-server" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729394 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="598d5a70-7591-41d3-942e-d1de738403ad" containerName="registry-server" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729415 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="58220605-30a9-4d4f-b785-3e9edabcfb5c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.729538 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="195bee6c-70d2-45ee-94da-8b5873d3cea0" containerName="registry-server" Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.731514 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.733499 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.734601 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.735744 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.735849 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.742101 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6bsqg"]
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.793524 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.793594 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.793625 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchs6\" (UniqueName: \"kubernetes.io/projected/c8f27e6b-2964-4a8b-b976-92fb6421705a-kube-api-access-zchs6\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.895587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.896511 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchs6\" (UniqueName: \"kubernetes.io/projected/c8f27e6b-2964-4a8b-b976-92fb6421705a-kube-api-access-zchs6\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.896789 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.899784 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.899905 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:02 crc kubenswrapper[4782]: I1124 12:28:02.916956 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchs6\" (UniqueName: \"kubernetes.io/projected/c8f27e6b-2964-4a8b-b976-92fb6421705a-kube-api-access-zchs6\") pod \"ssh-known-hosts-edpm-deployment-6bsqg\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") " pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:03 crc kubenswrapper[4782]: I1124 12:28:03.049010 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:03 crc kubenswrapper[4782]: I1124 12:28:03.647442 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6bsqg"]
Nov 24 12:28:03 crc kubenswrapper[4782]: I1124 12:28:03.670675 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 12:28:04 crc kubenswrapper[4782]: I1124 12:28:04.229539 4782 scope.go:117] "RemoveContainer" containerID="35c5577736e0ff83143a7d724fb03fde38db5cd2427e79526badfd1d61442aaf"
Nov 24 12:28:04 crc kubenswrapper[4782]: I1124 12:28:04.654739 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg" event={"ID":"c8f27e6b-2964-4a8b-b976-92fb6421705a","Type":"ContainerStarted","Data":"8f434d500787cae1a469c3e2e580a5915cc5538123f6101160a31cbf68743b14"}
Nov 24 12:28:04 crc kubenswrapper[4782]: I1124 12:28:04.654784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg" event={"ID":"c8f27e6b-2964-4a8b-b976-92fb6421705a","Type":"ContainerStarted","Data":"4905c95f9f279ea71fb8beb014c39c38ba0a9d83891a98e2ab07bb3adb9033b9"}
Nov 24 12:28:04 crc kubenswrapper[4782]: I1124 12:28:04.674831 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg" podStartSLOduration=2.270041533 podStartE2EDuration="2.674811401s" podCreationTimestamp="2025-11-24 12:28:02 +0000 UTC" firstStartedPulling="2025-11-24 12:28:03.670481044 +0000 UTC m=+1932.914314813" lastFinishedPulling="2025-11-24 12:28:04.075250912 +0000 UTC m=+1933.319084681" observedRunningTime="2025-11-24 12:28:04.668676244 +0000 UTC m=+1933.912510023" watchObservedRunningTime="2025-11-24 12:28:04.674811401 +0000 UTC m=+1933.918645170"
Nov 24 12:28:12 crc kubenswrapper[4782]: I1124 12:28:12.721762 4782 generic.go:334] "Generic (PLEG): container finished" podID="c8f27e6b-2964-4a8b-b976-92fb6421705a" containerID="8f434d500787cae1a469c3e2e580a5915cc5538123f6101160a31cbf68743b14" exitCode=0
Nov 24 12:28:12 crc kubenswrapper[4782]: I1124 12:28:12.722346 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg" event={"ID":"c8f27e6b-2964-4a8b-b976-92fb6421705a","Type":"ContainerDied","Data":"8f434d500787cae1a469c3e2e580a5915cc5538123f6101160a31cbf68743b14"}
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.137514 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg"
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.202133 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-ssh-key-openstack-edpm-ipam\") pod \"c8f27e6b-2964-4a8b-b976-92fb6421705a\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") "
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.202214 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-inventory-0\") pod \"c8f27e6b-2964-4a8b-b976-92fb6421705a\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") "
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.202341 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zchs6\" (UniqueName: \"kubernetes.io/projected/c8f27e6b-2964-4a8b-b976-92fb6421705a-kube-api-access-zchs6\") pod \"c8f27e6b-2964-4a8b-b976-92fb6421705a\" (UID: \"c8f27e6b-2964-4a8b-b976-92fb6421705a\") "
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.209473 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f27e6b-2964-4a8b-b976-92fb6421705a-kube-api-access-zchs6" (OuterVolumeSpecName: "kube-api-access-zchs6") pod "c8f27e6b-2964-4a8b-b976-92fb6421705a" (UID: "c8f27e6b-2964-4a8b-b976-92fb6421705a"). InnerVolumeSpecName "kube-api-access-zchs6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.233164 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c8f27e6b-2964-4a8b-b976-92fb6421705a" (UID: "c8f27e6b-2964-4a8b-b976-92fb6421705a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.234497 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c8f27e6b-2964-4a8b-b976-92fb6421705a" (UID: "c8f27e6b-2964-4a8b-b976-92fb6421705a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.305024 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.305059 4782 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8f27e6b-2964-4a8b-b976-92fb6421705a-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.305068 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zchs6\" (UniqueName: \"kubernetes.io/projected/c8f27e6b-2964-4a8b-b976-92fb6421705a-kube-api-access-zchs6\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.738941 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg" event={"ID":"c8f27e6b-2964-4a8b-b976-92fb6421705a","Type":"ContainerDied","Data":"4905c95f9f279ea71fb8beb014c39c38ba0a9d83891a98e2ab07bb3adb9033b9"} Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.738988 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4905c95f9f279ea71fb8beb014c39c38ba0a9d83891a98e2ab07bb3adb9033b9" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.739063 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6bsqg" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.832751 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"] Nov 24 12:28:14 crc kubenswrapper[4782]: E1124 12:28:14.833232 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f27e6b-2964-4a8b-b976-92fb6421705a" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.833252 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f27e6b-2964-4a8b-b976-92fb6421705a" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.833536 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f27e6b-2964-4a8b-b976-92fb6421705a" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.834321 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.837428 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6"
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.837599 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.841062 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.845338 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"]
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.849202 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.917845 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.917911 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4prjb\" (UniqueName: \"kubernetes.io/projected/b11f38fd-d0b3-4272-8c87-921c1d40b832-kube-api-access-4prjb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"
Nov 24 12:28:14 crc kubenswrapper[4782]: I1124 12:28:14.918073 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"
Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.020160 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"
Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.020660 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"
Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.020743 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4prjb\" (UniqueName: \"kubernetes.io/projected/b11f38fd-d0b3-4272-8c87-921c1d40b832-kube-api-access-4prjb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.026930 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.028521 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.044280 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4prjb\" (UniqueName: \"kubernetes.io/projected/b11f38fd-d0b3-4272-8c87-921c1d40b832-kube-api-access-4prjb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6c48d\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.164640 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.688714 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d"] Nov 24 12:28:15 crc kubenswrapper[4782]: I1124 12:28:15.747245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" event={"ID":"b11f38fd-d0b3-4272-8c87-921c1d40b832","Type":"ContainerStarted","Data":"d8c62caff932f7e80e7e1082bf0bb845eabafc228250882cfb8630b69ae9b636"} Nov 24 12:28:16 crc kubenswrapper[4782]: I1124 12:28:16.757334 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" event={"ID":"b11f38fd-d0b3-4272-8c87-921c1d40b832","Type":"ContainerStarted","Data":"53aab8b06809f101ae58655e4e76bfee51d5ed58ee5f71d0c35068a046c4b37a"} Nov 24 12:28:16 crc kubenswrapper[4782]: I1124 12:28:16.777592 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" podStartSLOduration=2.331940703 podStartE2EDuration="2.77756961s" podCreationTimestamp="2025-11-24 12:28:14 +0000 UTC" firstStartedPulling="2025-11-24 12:28:15.685738405 +0000 UTC m=+1944.929572174" lastFinishedPulling="2025-11-24 12:28:16.131367312 +0000 UTC m=+1945.375201081" observedRunningTime="2025-11-24 12:28:16.771676239 +0000 UTC m=+1946.015510008" watchObservedRunningTime="2025-11-24 12:28:16.77756961 +0000 UTC m=+1946.021403389" Nov 24 12:28:25 crc kubenswrapper[4782]: I1124 12:28:25.848825 4782 generic.go:334] "Generic (PLEG): container finished" podID="b11f38fd-d0b3-4272-8c87-921c1d40b832" containerID="53aab8b06809f101ae58655e4e76bfee51d5ed58ee5f71d0c35068a046c4b37a" exitCode=0 Nov 24 12:28:25 crc kubenswrapper[4782]: I1124 12:28:25.848926 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" 
event={"ID":"b11f38fd-d0b3-4272-8c87-921c1d40b832","Type":"ContainerDied","Data":"53aab8b06809f101ae58655e4e76bfee51d5ed58ee5f71d0c35068a046c4b37a"} Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.279238 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.360659 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-inventory\") pod \"b11f38fd-d0b3-4272-8c87-921c1d40b832\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.360827 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4prjb\" (UniqueName: \"kubernetes.io/projected/b11f38fd-d0b3-4272-8c87-921c1d40b832-kube-api-access-4prjb\") pod \"b11f38fd-d0b3-4272-8c87-921c1d40b832\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.360970 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-ssh-key\") pod \"b11f38fd-d0b3-4272-8c87-921c1d40b832\" (UID: \"b11f38fd-d0b3-4272-8c87-921c1d40b832\") " Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.368066 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11f38fd-d0b3-4272-8c87-921c1d40b832-kube-api-access-4prjb" (OuterVolumeSpecName: "kube-api-access-4prjb") pod "b11f38fd-d0b3-4272-8c87-921c1d40b832" (UID: "b11f38fd-d0b3-4272-8c87-921c1d40b832"). InnerVolumeSpecName "kube-api-access-4prjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.388586 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-inventory" (OuterVolumeSpecName: "inventory") pod "b11f38fd-d0b3-4272-8c87-921c1d40b832" (UID: "b11f38fd-d0b3-4272-8c87-921c1d40b832"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.390150 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b11f38fd-d0b3-4272-8c87-921c1d40b832" (UID: "b11f38fd-d0b3-4272-8c87-921c1d40b832"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.463502 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.463539 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4prjb\" (UniqueName: \"kubernetes.io/projected/b11f38fd-d0b3-4272-8c87-921c1d40b832-kube-api-access-4prjb\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.463552 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b11f38fd-d0b3-4272-8c87-921c1d40b832-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.864932 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" event={"ID":"b11f38fd-d0b3-4272-8c87-921c1d40b832","Type":"ContainerDied","Data":"d8c62caff932f7e80e7e1082bf0bb845eabafc228250882cfb8630b69ae9b636"} Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.864969 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c62caff932f7e80e7e1082bf0bb845eabafc228250882cfb8630b69ae9b636" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.865275 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6c48d" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.934750 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"] Nov 24 12:28:27 crc kubenswrapper[4782]: E1124 12:28:27.935170 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11f38fd-d0b3-4272-8c87-921c1d40b832" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.935187 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11f38fd-d0b3-4272-8c87-921c1d40b832" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.935344 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11f38fd-d0b3-4272-8c87-921c1d40b832" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.935975 4782 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.937784 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.937818 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6"
Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.943954 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.953257 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"]
Nov 24 12:28:27 crc kubenswrapper[4782]: I1124 12:28:27.953733 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.075657 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"
Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.075771 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hhn\" (UniqueName: \"kubernetes.io/projected/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-kube-api-access-b8hhn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"
Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.075797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"
Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.177483 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8hhn\" (UniqueName: \"kubernetes.io/projected/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-kube-api-access-b8hhn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"
Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.177542 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"
Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.177664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"
\"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.182974 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.183146 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.201118 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8hhn\" (UniqueName: \"kubernetes.io/projected/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-kube-api-access-b8hhn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.255706 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.773418 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk"] Nov 24 12:28:28 crc kubenswrapper[4782]: I1124 12:28:28.876291 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" event={"ID":"f27d1d98-ecfa-4977-aa6c-abf87b9e244a","Type":"ContainerStarted","Data":"f5a47b2faee3f36a652a00a0129f3b6c99e6638ecdd47760ce5daee235b4e510"} Nov 24 12:28:29 crc kubenswrapper[4782]: I1124 12:28:29.884843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" event={"ID":"f27d1d98-ecfa-4977-aa6c-abf87b9e244a","Type":"ContainerStarted","Data":"94967f891b37fa88f1d50274d363e13dbad0cc4319ce8e423ef5cfc5140a0fa8"} Nov 24 12:28:29 crc kubenswrapper[4782]: I1124 12:28:29.908098 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" podStartSLOduration=2.238933523 podStartE2EDuration="2.908079276s" podCreationTimestamp="2025-11-24 12:28:27 +0000 UTC" firstStartedPulling="2025-11-24 12:28:28.783075688 +0000 UTC m=+1958.026909457" lastFinishedPulling="2025-11-24 12:28:29.452221421 +0000 UTC m=+1958.696055210" observedRunningTime="2025-11-24 12:28:29.905882078 +0000 UTC m=+1959.149715857" watchObservedRunningTime="2025-11-24 12:28:29.908079276 +0000 UTC m=+1959.151913045" Nov 24 12:28:30 crc kubenswrapper[4782]: I1124 12:28:30.411287 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:28:30 crc kubenswrapper[4782]: I1124 12:28:30.411354 4782 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:28:39 crc kubenswrapper[4782]: E1124 12:28:39.634783 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf27d1d98_ecfa_4977_aa6c_abf87b9e244a.slice/crio-94967f891b37fa88f1d50274d363e13dbad0cc4319ce8e423ef5cfc5140a0fa8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf27d1d98_ecfa_4977_aa6c_abf87b9e244a.slice/crio-conmon-94967f891b37fa88f1d50274d363e13dbad0cc4319ce8e423ef5cfc5140a0fa8.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:28:40 crc kubenswrapper[4782]: I1124 12:28:40.003653 4782 generic.go:334] "Generic (PLEG): container finished" podID="f27d1d98-ecfa-4977-aa6c-abf87b9e244a" containerID="94967f891b37fa88f1d50274d363e13dbad0cc4319ce8e423ef5cfc5140a0fa8" exitCode=0 Nov 24 12:28:40 crc kubenswrapper[4782]: I1124 12:28:40.003946 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" event={"ID":"f27d1d98-ecfa-4977-aa6c-abf87b9e244a","Type":"ContainerDied","Data":"94967f891b37fa88f1d50274d363e13dbad0cc4319ce8e423ef5cfc5140a0fa8"} Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.497304 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.661204 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-ssh-key\") pod \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.661289 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-inventory\") pod \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.661560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8hhn\" (UniqueName: \"kubernetes.io/projected/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-kube-api-access-b8hhn\") pod \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\" (UID: \"f27d1d98-ecfa-4977-aa6c-abf87b9e244a\") " Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.686566 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-kube-api-access-b8hhn" (OuterVolumeSpecName: "kube-api-access-b8hhn") pod "f27d1d98-ecfa-4977-aa6c-abf87b9e244a" (UID: "f27d1d98-ecfa-4977-aa6c-abf87b9e244a"). InnerVolumeSpecName "kube-api-access-b8hhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.701686 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f27d1d98-ecfa-4977-aa6c-abf87b9e244a" (UID: "f27d1d98-ecfa-4977-aa6c-abf87b9e244a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.702491 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-inventory" (OuterVolumeSpecName: "inventory") pod "f27d1d98-ecfa-4977-aa6c-abf87b9e244a" (UID: "f27d1d98-ecfa-4977-aa6c-abf87b9e244a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.763786 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8hhn\" (UniqueName: \"kubernetes.io/projected/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-kube-api-access-b8hhn\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.763811 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:41 crc kubenswrapper[4782]: I1124 12:28:41.763820 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f27d1d98-ecfa-4977-aa6c-abf87b9e244a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.030746 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" event={"ID":"f27d1d98-ecfa-4977-aa6c-abf87b9e244a","Type":"ContainerDied","Data":"f5a47b2faee3f36a652a00a0129f3b6c99e6638ecdd47760ce5daee235b4e510"} Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.031282 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5a47b2faee3f36a652a00a0129f3b6c99e6638ecdd47760ce5daee235b4e510" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.031506 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.143502 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859"] Nov 24 12:28:42 crc kubenswrapper[4782]: E1124 12:28:42.143941 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27d1d98-ecfa-4977-aa6c-abf87b9e244a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.143967 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27d1d98-ecfa-4977-aa6c-abf87b9e244a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.144176 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27d1d98-ecfa-4977-aa6c-abf87b9e244a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.144954 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.149447 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.149871 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.149965 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.150086 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.150155 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.150318 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.150506 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.152938 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.175635 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859"] Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275645 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275703 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275740 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: 
\"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275890 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275921 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.275968 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.276033 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.276088 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.276112 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.276167 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.276201 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.276225 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6g5t\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-kube-api-access-h6g5t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379288 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379463 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379512 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379601 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379668 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379708 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6g5t\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-kube-api-access-h6g5t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379803 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379847 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379938 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.379993 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.380057 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.380117 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.380191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.384416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.385236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.385854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.386136 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.386907 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.387315 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.388925 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.389185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.390547 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.392435 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.392765 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.393733 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.394854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.402604 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6g5t\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-kube-api-access-h6g5t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-w9859\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:42 crc kubenswrapper[4782]: I1124 12:28:42.463723 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:28:43 crc kubenswrapper[4782]: I1124 12:28:43.012514 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859"] Nov 24 12:28:43 crc kubenswrapper[4782]: I1124 12:28:43.039929 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" event={"ID":"e7525d3d-3415-44de-a76a-e6de73a7dc1f","Type":"ContainerStarted","Data":"f846d319f2ce6810555726409fa6e7f2709da5a38f7472975e5377a8ba02eab2"} Nov 24 12:28:44 crc kubenswrapper[4782]: I1124 12:28:44.050269 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" event={"ID":"e7525d3d-3415-44de-a76a-e6de73a7dc1f","Type":"ContainerStarted","Data":"10c96bc3886a7e0a02cfbc894124ec91ef30193d5a69b401b3fad11d3ab07232"} Nov 24 12:28:44 crc kubenswrapper[4782]: I1124 12:28:44.069666 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" podStartSLOduration=1.620629853 podStartE2EDuration="2.069648527s" podCreationTimestamp="2025-11-24 12:28:42 +0000 UTC" firstStartedPulling="2025-11-24 12:28:43.021682663 +0000 UTC m=+1972.265516432" lastFinishedPulling="2025-11-24 12:28:43.470701317 +0000 UTC m=+1972.714535106" observedRunningTime="2025-11-24 12:28:44.068949389 +0000 UTC m=+1973.312783168" watchObservedRunningTime="2025-11-24 12:28:44.069648527 +0000 UTC m=+1973.313482296" Nov 24 12:29:00 crc kubenswrapper[4782]: I1124 12:29:00.410826 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:29:00 crc kubenswrapper[4782]: I1124 12:29:00.411792 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:29:00 crc kubenswrapper[4782]: I1124 12:29:00.411858 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:29:00 crc kubenswrapper[4782]: I1124 12:29:00.412713 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5f5e190d98771d99805cc8ad5110104a3ace9bfc8a7a349a68c23899adc8da6"} 
pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:29:00 crc kubenswrapper[4782]: I1124 12:29:00.412774 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://e5f5e190d98771d99805cc8ad5110104a3ace9bfc8a7a349a68c23899adc8da6" gracePeriod=600 Nov 24 12:29:01 crc kubenswrapper[4782]: I1124 12:29:01.193762 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="e5f5e190d98771d99805cc8ad5110104a3ace9bfc8a7a349a68c23899adc8da6" exitCode=0 Nov 24 12:29:01 crc kubenswrapper[4782]: I1124 12:29:01.193851 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"e5f5e190d98771d99805cc8ad5110104a3ace9bfc8a7a349a68c23899adc8da6"} Nov 24 12:29:01 crc kubenswrapper[4782]: I1124 12:29:01.194150 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059"} Nov 24 12:29:01 crc kubenswrapper[4782]: I1124 12:29:01.194178 4782 scope.go:117] "RemoveContainer" containerID="3ac5bf236c289d3298c8587513fa1f1367ba67bd27bebeaadb01ad587c33b0ac" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.146601 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxcwl"] Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.149343 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.173444 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxcwl"] Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.298054 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-catalog-content\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.298194 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cblk\" (UniqueName: \"kubernetes.io/projected/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-kube-api-access-2cblk\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.298231 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-utilities\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.400116 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-utilities\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.400341 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-catalog-content\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.400493 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cblk\" (UniqueName: \"kubernetes.io/projected/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-kube-api-access-2cblk\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.400764 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-utilities\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.400948 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-catalog-content\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.423597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2cblk\" (UniqueName: \"kubernetes.io/projected/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-kube-api-access-2cblk\") pod \"redhat-operators-hxcwl\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.469671 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:11 crc kubenswrapper[4782]: I1124 12:29:11.976409 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxcwl"] Nov 24 12:29:12 crc kubenswrapper[4782]: I1124 12:29:12.293423 4782 generic.go:334] "Generic (PLEG): container finished" podID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerID="45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e" exitCode=0 Nov 24 12:29:12 crc kubenswrapper[4782]: I1124 12:29:12.293477 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcwl" event={"ID":"6f09dacf-9cf5-4479-af8f-1cf3dbe89523","Type":"ContainerDied","Data":"45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e"} Nov 24 12:29:12 crc kubenswrapper[4782]: I1124 12:29:12.293501 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcwl" event={"ID":"6f09dacf-9cf5-4479-af8f-1cf3dbe89523","Type":"ContainerStarted","Data":"6c1456e4bb714f3e43809024d0790001f30cdd8bfdb8f04b54d9915dc3cde67f"} Nov 24 12:29:13 crc kubenswrapper[4782]: I1124 12:29:13.304881 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcwl" event={"ID":"6f09dacf-9cf5-4479-af8f-1cf3dbe89523","Type":"ContainerStarted","Data":"dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f"} Nov 24 12:29:17 crc kubenswrapper[4782]: I1124 12:29:17.339729 4782 generic.go:334] "Generic (PLEG): container finished" podID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerID="dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f" exitCode=0 Nov 24 12:29:17 crc kubenswrapper[4782]: I1124 12:29:17.339775 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcwl" event={"ID":"6f09dacf-9cf5-4479-af8f-1cf3dbe89523","Type":"ContainerDied","Data":"dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f"} Nov 24 12:29:18 crc kubenswrapper[4782]: I1124 12:29:18.353902 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcwl" event={"ID":"6f09dacf-9cf5-4479-af8f-1cf3dbe89523","Type":"ContainerStarted","Data":"7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e"} Nov 24 12:29:18 crc kubenswrapper[4782]: I1124 12:29:18.375401 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxcwl" podStartSLOduration=1.918476842 podStartE2EDuration="7.375364147s" podCreationTimestamp="2025-11-24 12:29:11 +0000 UTC" firstStartedPulling="2025-11-24 12:29:12.294784824 +0000 UTC m=+2001.538618593" lastFinishedPulling="2025-11-24 12:29:17.751672129 +0000 UTC m=+2006.995505898" observedRunningTime="2025-11-24 12:29:18.369316387 +0000 UTC m=+2007.613150156" watchObservedRunningTime="2025-11-24 12:29:18.375364147 +0000 UTC m=+2007.619197916" Nov 24 12:29:21 crc kubenswrapper[4782]: I1124 12:29:21.470403 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxcwl" 
Nov 24 12:29:21 crc kubenswrapper[4782]: I1124 12:29:21.471002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:22 crc kubenswrapper[4782]: I1124 12:29:22.519933 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxcwl" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="registry-server" probeResult="failure" output=< Nov 24 12:29:22 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:29:22 crc kubenswrapper[4782]: > Nov 24 12:29:26 crc kubenswrapper[4782]: I1124 12:29:26.416067 4782 generic.go:334] "Generic (PLEG): container finished" podID="e7525d3d-3415-44de-a76a-e6de73a7dc1f" containerID="10c96bc3886a7e0a02cfbc894124ec91ef30193d5a69b401b3fad11d3ab07232" exitCode=0 Nov 24 12:29:26 crc kubenswrapper[4782]: I1124 12:29:26.416161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" event={"ID":"e7525d3d-3415-44de-a76a-e6de73a7dc1f","Type":"ContainerDied","Data":"10c96bc3886a7e0a02cfbc894124ec91ef30193d5a69b401b3fad11d3ab07232"} Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.834851 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.905778 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6g5t\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-kube-api-access-h6g5t\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.905928 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-repo-setup-combined-ca-bundle\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.905956 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-telemetry-combined-ca-bundle\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.905992 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906040 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906082 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ssh-key\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906108 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-inventory\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906169 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-neutron-metadata-combined-ca-bundle\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906194 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-libvirt-combined-ca-bundle\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906305 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ovn-combined-ca-bundle\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906350 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-nova-combined-ca-bundle\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906405 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-bootstrap-combined-ca-bundle\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906493 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.906523 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\" (UID: \"e7525d3d-3415-44de-a76a-e6de73a7dc1f\") " Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.913038 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: 
"e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.913059 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.914469 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-kube-api-access-h6g5t" (OuterVolumeSpecName: "kube-api-access-h6g5t") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "kube-api-access-h6g5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.915413 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.918693 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.918768 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.921963 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.922277 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.922550 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.922740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.923534 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.925434 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.944722 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-inventory" (OuterVolumeSpecName: "inventory") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:27 crc kubenswrapper[4782]: I1124 12:29:27.954688 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e7525d3d-3415-44de-a76a-e6de73a7dc1f" (UID: "e7525d3d-3415-44de-a76a-e6de73a7dc1f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009412 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6g5t\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-kube-api-access-h6g5t\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009523 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009575 4782 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009594 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009611 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009703 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009823 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009833 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009842 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009852 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009886 4782 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009894 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7525d3d-3415-44de-a76a-e6de73a7dc1f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 
12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009903 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.009912 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7525d3d-3415-44de-a76a-e6de73a7dc1f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.434642 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" event={"ID":"e7525d3d-3415-44de-a76a-e6de73a7dc1f","Type":"ContainerDied","Data":"f846d319f2ce6810555726409fa6e7f2709da5a38f7472975e5377a8ba02eab2"} Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.434688 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f846d319f2ce6810555726409fa6e7f2709da5a38f7472975e5377a8ba02eab2" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.434747 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-w9859" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.539640 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz"] Nov 24 12:29:28 crc kubenswrapper[4782]: E1124 12:29:28.540031 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7525d3d-3415-44de-a76a-e6de73a7dc1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.540049 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7525d3d-3415-44de-a76a-e6de73a7dc1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.540201 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7525d3d-3415-44de-a76a-e6de73a7dc1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.540961 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.545603 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.545790 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.545997 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.546125 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.552542 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.555003 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz"] Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.621016 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.621144 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.621189 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.621292 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.621319 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbbn\" (UniqueName: \"kubernetes.io/projected/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-kube-api-access-wwbbn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.723803 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.723884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.723924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.724956 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwbbn\" (UniqueName: \"kubernetes.io/projected/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-kube-api-access-wwbbn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.725188 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.725313 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.727800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.729639 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.737909 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.741540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwbbn\" (UniqueName: \"kubernetes.io/projected/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-kube-api-access-wwbbn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kc6qz\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:28 crc kubenswrapper[4782]: I1124 12:29:28.858516 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:29:29 crc kubenswrapper[4782]: W1124 12:29:29.419105 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb30e01d5_eac0_49f4_88f5_bf4b5758bf1d.slice/crio-52e18bcae5179e93ab29c6228ab85e888a0dce45b7f6f435a99176db1ce7bc73 WatchSource:0}: Error finding container 52e18bcae5179e93ab29c6228ab85e888a0dce45b7f6f435a99176db1ce7bc73: Status 404 returned error can't find the container with id 52e18bcae5179e93ab29c6228ab85e888a0dce45b7f6f435a99176db1ce7bc73 Nov 24 12:29:29 crc kubenswrapper[4782]: I1124 12:29:29.421121 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz"] Nov 24 12:29:29 crc kubenswrapper[4782]: I1124 12:29:29.447410 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" event={"ID":"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d","Type":"ContainerStarted","Data":"52e18bcae5179e93ab29c6228ab85e888a0dce45b7f6f435a99176db1ce7bc73"} Nov 24 12:29:30 crc kubenswrapper[4782]: I1124 12:29:30.458955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" event={"ID":"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d","Type":"ContainerStarted","Data":"044acbfe5bd2be989dd409674de25bb3d0f11f8a22fcff062c2bc2aea6c4d3a3"} Nov 24 12:29:30 crc kubenswrapper[4782]: I1124 12:29:30.481068 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" podStartSLOduration=2.028236167 podStartE2EDuration="2.481046701s" podCreationTimestamp="2025-11-24 12:29:28 +0000 UTC" firstStartedPulling="2025-11-24 12:29:29.422534577 +0000 UTC m=+2018.666368356" lastFinishedPulling="2025-11-24 12:29:29.875345121 +0000 UTC m=+2019.119178890" observedRunningTime="2025-11-24 12:29:30.476196793 +0000 UTC m=+2019.720030572" watchObservedRunningTime="2025-11-24 12:29:30.481046701 +0000 UTC m=+2019.724880470" Nov 24 12:29:31 crc kubenswrapper[4782]: I1124 12:29:31.533801 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:31 crc kubenswrapper[4782]: I1124 12:29:31.581872 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:31 crc kubenswrapper[4782]: I1124 12:29:31.775734 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxcwl"] Nov 24 12:29:33 crc kubenswrapper[4782]: I1124 12:29:33.482397 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hxcwl" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="registry-server" 
containerID="cri-o://7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e" gracePeriod=2 Nov 24 12:29:33 crc kubenswrapper[4782]: I1124 12:29:33.914186 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.033782 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-utilities\") pod \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.033881 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cblk\" (UniqueName: \"kubernetes.io/projected/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-kube-api-access-2cblk\") pod \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.033922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-catalog-content\") pod \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\" (UID: \"6f09dacf-9cf5-4479-af8f-1cf3dbe89523\") " Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.034919 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-utilities" (OuterVolumeSpecName: "utilities") pod "6f09dacf-9cf5-4479-af8f-1cf3dbe89523" (UID: "6f09dacf-9cf5-4479-af8f-1cf3dbe89523"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.045598 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-kube-api-access-2cblk" (OuterVolumeSpecName: "kube-api-access-2cblk") pod "6f09dacf-9cf5-4479-af8f-1cf3dbe89523" (UID: "6f09dacf-9cf5-4479-af8f-1cf3dbe89523"). InnerVolumeSpecName "kube-api-access-2cblk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.133912 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f09dacf-9cf5-4479-af8f-1cf3dbe89523" (UID: "6f09dacf-9cf5-4479-af8f-1cf3dbe89523"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.136163 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cblk\" (UniqueName: \"kubernetes.io/projected/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-kube-api-access-2cblk\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.136206 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.136228 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09dacf-9cf5-4479-af8f-1cf3dbe89523-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.494506 4782 generic.go:334] "Generic (PLEG): container finished" podID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerID="7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e" exitCode=0 Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.494575 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcwl" event={"ID":"6f09dacf-9cf5-4479-af8f-1cf3dbe89523","Type":"ContainerDied","Data":"7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e"} Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.494646 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcwl" event={"ID":"6f09dacf-9cf5-4479-af8f-1cf3dbe89523","Type":"ContainerDied","Data":"6c1456e4bb714f3e43809024d0790001f30cdd8bfdb8f04b54d9915dc3cde67f"} Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.494671 4782 scope.go:117] "RemoveContainer" containerID="7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.494595 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcwl" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.529813 4782 scope.go:117] "RemoveContainer" containerID="dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.547479 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxcwl"] Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.559558 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hxcwl"] Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.568830 4782 scope.go:117] "RemoveContainer" containerID="45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.604382 4782 scope.go:117] "RemoveContainer" containerID="7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e" Nov 24 12:29:34 crc kubenswrapper[4782]: E1124 12:29:34.604812 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e\": container with ID starting with 7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e not found: ID does not exist" containerID="7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.604847 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e"} err="failed to get container status \"7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e\": rpc error: code = NotFound desc = could not find container \"7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e\": container with ID starting with 7e5ce22809877190d5fbaae825b053c1daf1e87319e1ab68a177c80b7e26fd8e not found: ID does not exist" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.604874 4782 scope.go:117] "RemoveContainer" containerID="dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f" Nov 24 12:29:34 crc kubenswrapper[4782]: E1124 12:29:34.605097 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f\": container with ID starting with dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f not found: ID does not exist" containerID="dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.605127 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f"} err="failed to get container status \"dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f\": rpc error: code = NotFound desc = could not find container \"dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f\": container with ID starting with dbbf02cb4e14bf87ab9a91868647076ec1fc0228da450c03b62ae49995b2bb7f not found: ID does not exist" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.605141 4782 scope.go:117] "RemoveContainer" containerID="45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e" Nov 24 12:29:34 crc kubenswrapper[4782]: E1124 12:29:34.605364 4782 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e\": container with ID starting with 45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e not found: ID does not exist" containerID="45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e" Nov 24 12:29:34 crc kubenswrapper[4782]: I1124 12:29:34.605413 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e"} err="failed to get container status \"45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e\": rpc error: code = NotFound desc = could not find container \"45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e\": container with ID starting with 45aead53b3529ff8566007e336a4b7ceac6782b66b2bbdfacea24a5b0f32349e not found: ID does not exist" Nov 24 12:29:35 crc kubenswrapper[4782]: I1124 12:29:35.505441 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" path="/var/lib/kubelet/pods/6f09dacf-9cf5-4479-af8f-1cf3dbe89523/volumes" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.151227 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd"] Nov 24 12:30:00 crc kubenswrapper[4782]: E1124 12:30:00.152177 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.152190 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[4782]: E1124 12:30:00.152217 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="extract-utilities" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.152225 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="extract-utilities" Nov 24 12:30:00 crc kubenswrapper[4782]: E1124 12:30:00.152247 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="extract-content" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.152255 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="extract-content" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.152456 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f09dacf-9cf5-4479-af8f-1cf3dbe89523" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.153091 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.155098 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.158037 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.192535 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd"] Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.272851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnl2r\" (UniqueName: \"kubernetes.io/projected/40aebb5b-6b51-4446-b7e3-2fb92d723c83-kube-api-access-pnl2r\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.273001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40aebb5b-6b51-4446-b7e3-2fb92d723c83-secret-volume\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.273143 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40aebb5b-6b51-4446-b7e3-2fb92d723c83-config-volume\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.374847 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40aebb5b-6b51-4446-b7e3-2fb92d723c83-config-volume\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.374926 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnl2r\" (UniqueName: \"kubernetes.io/projected/40aebb5b-6b51-4446-b7e3-2fb92d723c83-kube-api-access-pnl2r\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.374994 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40aebb5b-6b51-4446-b7e3-2fb92d723c83-secret-volume\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.375775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40aebb5b-6b51-4446-b7e3-2fb92d723c83-config-volume\") pod 
\"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.383270 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40aebb5b-6b51-4446-b7e3-2fb92d723c83-secret-volume\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.393229 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnl2r\" (UniqueName: \"kubernetes.io/projected/40aebb5b-6b51-4446-b7e3-2fb92d723c83-kube-api-access-pnl2r\") pod \"collect-profiles-29399790-dbfvd\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.475266 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:00 crc kubenswrapper[4782]: I1124 12:30:00.938085 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd"] Nov 24 12:30:01 crc kubenswrapper[4782]: I1124 12:30:01.746431 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" event={"ID":"40aebb5b-6b51-4446-b7e3-2fb92d723c83","Type":"ContainerStarted","Data":"9a6c8e123dbc1b981a0f479b7f1ec366db8a7d36128014f7f89be6a19b5df0c6"} Nov 24 12:30:01 crc kubenswrapper[4782]: I1124 12:30:01.746774 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" event={"ID":"40aebb5b-6b51-4446-b7e3-2fb92d723c83","Type":"ContainerStarted","Data":"d8a37d5f8bdd129499a3d0d7b1b3cf7a11042f4cda5376b60b28747427b436dd"} Nov 24 12:30:01 crc kubenswrapper[4782]: I1124 12:30:01.763989 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" podStartSLOduration=1.763970426 podStartE2EDuration="1.763970426s" podCreationTimestamp="2025-11-24 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:30:01.759624231 +0000 UTC m=+2051.003458000" watchObservedRunningTime="2025-11-24 12:30:01.763970426 +0000 UTC m=+2051.007804195" Nov 24 12:30:02 crc kubenswrapper[4782]: I1124 12:30:02.756131 4782 generic.go:334] "Generic (PLEG): container finished" podID="40aebb5b-6b51-4446-b7e3-2fb92d723c83" containerID="9a6c8e123dbc1b981a0f479b7f1ec366db8a7d36128014f7f89be6a19b5df0c6" exitCode=0 Nov 24 12:30:02 crc kubenswrapper[4782]: I1124 12:30:02.756181 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" event={"ID":"40aebb5b-6b51-4446-b7e3-2fb92d723c83","Type":"ContainerDied","Data":"9a6c8e123dbc1b981a0f479b7f1ec366db8a7d36128014f7f89be6a19b5df0c6"} Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.114906 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.263255 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnl2r\" (UniqueName: \"kubernetes.io/projected/40aebb5b-6b51-4446-b7e3-2fb92d723c83-kube-api-access-pnl2r\") pod \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.263510 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40aebb5b-6b51-4446-b7e3-2fb92d723c83-secret-volume\") pod \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.263541 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40aebb5b-6b51-4446-b7e3-2fb92d723c83-config-volume\") pod \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\" (UID: \"40aebb5b-6b51-4446-b7e3-2fb92d723c83\") " Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.264101 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40aebb5b-6b51-4446-b7e3-2fb92d723c83-config-volume" (OuterVolumeSpecName: "config-volume") pod "40aebb5b-6b51-4446-b7e3-2fb92d723c83" (UID: "40aebb5b-6b51-4446-b7e3-2fb92d723c83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.270151 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40aebb5b-6b51-4446-b7e3-2fb92d723c83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40aebb5b-6b51-4446-b7e3-2fb92d723c83" (UID: "40aebb5b-6b51-4446-b7e3-2fb92d723c83"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.270227 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40aebb5b-6b51-4446-b7e3-2fb92d723c83-kube-api-access-pnl2r" (OuterVolumeSpecName: "kube-api-access-pnl2r") pod "40aebb5b-6b51-4446-b7e3-2fb92d723c83" (UID: "40aebb5b-6b51-4446-b7e3-2fb92d723c83"). InnerVolumeSpecName "kube-api-access-pnl2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.365939 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnl2r\" (UniqueName: \"kubernetes.io/projected/40aebb5b-6b51-4446-b7e3-2fb92d723c83-kube-api-access-pnl2r\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.365967 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40aebb5b-6b51-4446-b7e3-2fb92d723c83-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.365978 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40aebb5b-6b51-4446-b7e3-2fb92d723c83-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.590622 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l"] Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.598207 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-5cc8l"] Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.780829 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" event={"ID":"40aebb5b-6b51-4446-b7e3-2fb92d723c83","Type":"ContainerDied","Data":"d8a37d5f8bdd129499a3d0d7b1b3cf7a11042f4cda5376b60b28747427b436dd"} Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.780864 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a37d5f8bdd129499a3d0d7b1b3cf7a11042f4cda5376b60b28747427b436dd" Nov 24 12:30:04 crc kubenswrapper[4782]: I1124 12:30:04.780911 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-dbfvd" Nov 24 12:30:05 crc kubenswrapper[4782]: I1124 12:30:05.503565 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d162bb2e-700e-48bf-9f6c-e44b7a009a07" path="/var/lib/kubelet/pods/d162bb2e-700e-48bf-9f6c-e44b7a009a07/volumes" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.628373 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p8fzm"] Nov 24 12:30:15 crc kubenswrapper[4782]: E1124 12:30:15.629347 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aebb5b-6b51-4446-b7e3-2fb92d723c83" containerName="collect-profiles" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.629363 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aebb5b-6b51-4446-b7e3-2fb92d723c83" containerName="collect-profiles" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.629668 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="40aebb5b-6b51-4446-b7e3-2fb92d723c83" containerName="collect-profiles" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.631302 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.652603 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8fzm"] Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.808485 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-utilities\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.810024 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxt9\" (UniqueName: \"kubernetes.io/projected/a526f155-3a07-4787-96aa-e6b61a8b6a4f-kube-api-access-rxxt9\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.810179 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-catalog-content\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.911713 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxt9\" (UniqueName: \"kubernetes.io/projected/a526f155-3a07-4787-96aa-e6b61a8b6a4f-kube-api-access-rxxt9\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.911789 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-catalog-content\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.911882 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-utilities\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.912420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-catalog-content\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.912492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-utilities\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.946800 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rxxt9\" (UniqueName: \"kubernetes.io/projected/a526f155-3a07-4787-96aa-e6b61a8b6a4f-kube-api-access-rxxt9\") pod \"community-operators-p8fzm\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:15 crc kubenswrapper[4782]: I1124 12:30:15.958297 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:16 crc kubenswrapper[4782]: I1124 12:30:16.385738 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8fzm"] Nov 24 12:30:16 crc kubenswrapper[4782]: I1124 12:30:16.886869 4782 generic.go:334] "Generic (PLEG): container finished" podID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerID="620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185" exitCode=0 Nov 24 12:30:16 crc kubenswrapper[4782]: I1124 12:30:16.887193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8fzm" event={"ID":"a526f155-3a07-4787-96aa-e6b61a8b6a4f","Type":"ContainerDied","Data":"620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185"} Nov 24 12:30:16 crc kubenswrapper[4782]: I1124 12:30:16.887223 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8fzm" event={"ID":"a526f155-3a07-4787-96aa-e6b61a8b6a4f","Type":"ContainerStarted","Data":"2026e49288d9f6dc9233b5942336e70bbee5e12f156a96d9805b974819aaf679"} Nov 24 12:30:17 crc kubenswrapper[4782]: I1124 12:30:17.900065 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8fzm" event={"ID":"a526f155-3a07-4787-96aa-e6b61a8b6a4f","Type":"ContainerStarted","Data":"869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f"} Nov 24 12:30:18 crc kubenswrapper[4782]: I1124 12:30:18.911248 4782 generic.go:334] "Generic (PLEG): container finished" podID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerID="869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f" exitCode=0 Nov 24 12:30:18 crc kubenswrapper[4782]: I1124 12:30:18.911457 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8fzm" event={"ID":"a526f155-3a07-4787-96aa-e6b61a8b6a4f","Type":"ContainerDied","Data":"869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f"} Nov 24 12:30:19 crc kubenswrapper[4782]: I1124 12:30:19.922156 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8fzm" event={"ID":"a526f155-3a07-4787-96aa-e6b61a8b6a4f","Type":"ContainerStarted","Data":"ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5"} Nov 24 12:30:19 crc kubenswrapper[4782]: I1124 12:30:19.945387 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p8fzm" podStartSLOduration=2.559463921 podStartE2EDuration="4.945355456s" podCreationTimestamp="2025-11-24 12:30:15 +0000 UTC" firstStartedPulling="2025-11-24 12:30:16.888437157 +0000 UTC m=+2066.132270916" lastFinishedPulling="2025-11-24 12:30:19.274328662 +0000 UTC m=+2068.518162451" observedRunningTime="2025-11-24 12:30:19.938925166 +0000 UTC m=+2069.182758945" watchObservedRunningTime="2025-11-24 12:30:19.945355456 +0000 UTC m=+2069.189189225" Nov 24 12:30:25 crc kubenswrapper[4782]: I1124 12:30:25.959396 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:25 crc kubenswrapper[4782]: I1124 12:30:25.960525 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:26 crc kubenswrapper[4782]: I1124 12:30:26.005633 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:27 crc kubenswrapper[4782]: I1124 12:30:27.015353 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:27 crc kubenswrapper[4782]: I1124 12:30:27.062154 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8fzm"] Nov 24 12:30:28 crc kubenswrapper[4782]: I1124 12:30:28.988403 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p8fzm" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="registry-server" containerID="cri-o://ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5" gracePeriod=2 Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.479875 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.578960 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxt9\" (UniqueName: \"kubernetes.io/projected/a526f155-3a07-4787-96aa-e6b61a8b6a4f-kube-api-access-rxxt9\") pod \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.579262 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-catalog-content\") pod \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.579331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-utilities\") pod \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\" (UID: \"a526f155-3a07-4787-96aa-e6b61a8b6a4f\") " Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.581293 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-utilities" (OuterVolumeSpecName: "utilities") pod "a526f155-3a07-4787-96aa-e6b61a8b6a4f" (UID: "a526f155-3a07-4787-96aa-e6b61a8b6a4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.585661 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a526f155-3a07-4787-96aa-e6b61a8b6a4f-kube-api-access-rxxt9" (OuterVolumeSpecName: "kube-api-access-rxxt9") pod "a526f155-3a07-4787-96aa-e6b61a8b6a4f" (UID: "a526f155-3a07-4787-96aa-e6b61a8b6a4f"). InnerVolumeSpecName "kube-api-access-rxxt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.632305 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a526f155-3a07-4787-96aa-e6b61a8b6a4f" (UID: "a526f155-3a07-4787-96aa-e6b61a8b6a4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.681987 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxxt9\" (UniqueName: \"kubernetes.io/projected/a526f155-3a07-4787-96aa-e6b61a8b6a4f-kube-api-access-rxxt9\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.682047 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:29 crc kubenswrapper[4782]: I1124 12:30:29.682060 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a526f155-3a07-4787-96aa-e6b61a8b6a4f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:29.999657 4782 generic.go:334] "Generic (PLEG): container finished" podID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerID="ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5" exitCode=0 Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:29.999999 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8fzm" event={"ID":"a526f155-3a07-4787-96aa-e6b61a8b6a4f","Type":"ContainerDied","Data":"ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5"} Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.000033 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8fzm" event={"ID":"a526f155-3a07-4787-96aa-e6b61a8b6a4f","Type":"ContainerDied","Data":"2026e49288d9f6dc9233b5942336e70bbee5e12f156a96d9805b974819aaf679"} Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.000056 4782 scope.go:117] "RemoveContainer" containerID="ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.000244 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p8fzm" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.036080 4782 scope.go:117] "RemoveContainer" containerID="869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.051267 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8fzm"] Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.059428 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p8fzm"] Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.076542 4782 scope.go:117] "RemoveContainer" containerID="620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.118281 4782 scope.go:117] "RemoveContainer" containerID="ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5" Nov 24 12:30:30 crc kubenswrapper[4782]: E1124 12:30:30.118742 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5\": container with ID starting with ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5 not found: ID does not exist" containerID="ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.118808 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5"} err="failed to get container status \"ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5\": rpc error: code = NotFound desc = could not find container \"ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5\": container with ID starting with ca1855c9fc8f97542ffe6ad44901aa30c56466142b6a290d3f73ed8847cb1ce5 not found: ID does not exist" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.118844 4782 scope.go:117] "RemoveContainer" containerID="869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f" Nov 24 12:30:30 crc kubenswrapper[4782]: E1124 12:30:30.119233 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f\": container with ID starting with 869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f not found: ID does not exist" containerID="869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.119278 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f"} err="failed to get container status \"869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f\": rpc error: code = NotFound desc = could not find container \"869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f\": container with ID starting with 869f65f4934b743516fd77d52fb557d2864e3d43eb055138afba6aea30482c0f not found: ID does not exist" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.119303 4782 scope.go:117] "RemoveContainer" containerID="620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185" Nov 24 12:30:30 crc kubenswrapper[4782]: E1124 12:30:30.119546 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185\": container with ID starting with 620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185 not found: ID does not exist" containerID="620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185" Nov 24 12:30:30 crc kubenswrapper[4782]: I1124 12:30:30.119579 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185"} err="failed to get container status \"620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185\": rpc error: code = NotFound desc = could not find container \"620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185\": container with ID starting with 620ff96902c1e22b6fa58d801b5d13a20fec7909bc0a3f5ad502c011571a1185 not found: ID does not exist" Nov 24 12:30:31 crc kubenswrapper[4782]: I1124 12:30:31.507639 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" path="/var/lib/kubelet/pods/a526f155-3a07-4787-96aa-e6b61a8b6a4f/volumes" Nov 24 12:30:40 crc kubenswrapper[4782]: I1124 12:30:40.086184 4782 generic.go:334] "Generic (PLEG): container finished" podID="b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" containerID="044acbfe5bd2be989dd409674de25bb3d0f11f8a22fcff062c2bc2aea6c4d3a3" exitCode=0 Nov 24 12:30:40 crc kubenswrapper[4782]: I1124 12:30:40.086401 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" event={"ID":"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d","Type":"ContainerDied","Data":"044acbfe5bd2be989dd409674de25bb3d0f11f8a22fcff062c2bc2aea6c4d3a3"} Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.695195 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.813272 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovn-combined-ca-bundle\") pod \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.813450 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ssh-key\") pod \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.813473 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-inventory\") pod \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.814270 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwbbn\" (UniqueName: \"kubernetes.io/projected/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-kube-api-access-wwbbn\") pod \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.814340 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovncontroller-config-0\") pod \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\" (UID: \"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d\") " Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.839678 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" (UID: "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.840239 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-kube-api-access-wwbbn" (OuterVolumeSpecName: "kube-api-access-wwbbn") pod "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" (UID: "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d"). InnerVolumeSpecName "kube-api-access-wwbbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.843036 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" (UID: "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.845354 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-inventory" (OuterVolumeSpecName: "inventory") pod "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" (UID: "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.860518 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" (UID: "b30e01d5-eac0-49f4-88f5-bf4b5758bf1d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.916532 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.916567 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.916579 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.916592 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwbbn\" (UniqueName: \"kubernetes.io/projected/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-kube-api-access-wwbbn\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:41 crc kubenswrapper[4782]: I1124 12:30:41.916602 4782 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b30e01d5-eac0-49f4-88f5-bf4b5758bf1d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.103691 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" event={"ID":"b30e01d5-eac0-49f4-88f5-bf4b5758bf1d","Type":"ContainerDied","Data":"52e18bcae5179e93ab29c6228ab85e888a0dce45b7f6f435a99176db1ce7bc73"} Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.103725 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52e18bcae5179e93ab29c6228ab85e888a0dce45b7f6f435a99176db1ce7bc73" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.103775 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kc6qz" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.201409 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46"] Nov 24 12:30:42 crc kubenswrapper[4782]: E1124 12:30:42.201773 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.201792 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 12:30:42 crc kubenswrapper[4782]: E1124 12:30:42.201812 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="registry-server" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.201818 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="registry-server" Nov 24 12:30:42 crc kubenswrapper[4782]: E1124 12:30:42.201832 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="extract-utilities" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.201840 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="extract-utilities" Nov 24 12:30:42 crc kubenswrapper[4782]: E1124 12:30:42.201870 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="extract-content" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.201876 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="extract-content" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.202023 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a526f155-3a07-4787-96aa-e6b61a8b6a4f" containerName="registry-server" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.202040 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b30e01d5-eac0-49f4-88f5-bf4b5758bf1d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.202717 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.204836 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.205300 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.205589 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.205796 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.207847 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.217334 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46"] Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.218547 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.323585 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.323677 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.323748 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvkm\" (UniqueName: \"kubernetes.io/projected/891636a5-0fde-4436-b3ab-7831d7420439-kube-api-access-5cvkm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.323773 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.323939 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.323999 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.426329 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.426466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.427295 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.427360 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.427454 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cvkm\" (UniqueName: \"kubernetes.io/projected/891636a5-0fde-4436-b3ab-7831d7420439-kube-api-access-5cvkm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.427530 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.431434 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.431643 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.431803 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.432689 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.433929 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.448228 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cvkm\" (UniqueName: \"kubernetes.io/projected/891636a5-0fde-4436-b3ab-7831d7420439-kube-api-access-5cvkm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:42 crc kubenswrapper[4782]: I1124 12:30:42.526549 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:30:43 crc kubenswrapper[4782]: I1124 12:30:43.074694 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46"] Nov 24 12:30:43 crc kubenswrapper[4782]: I1124 12:30:43.115682 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" event={"ID":"891636a5-0fde-4436-b3ab-7831d7420439","Type":"ContainerStarted","Data":"f9eb67cab7596183ef69ee2999095153735f3123df14f7c20eca0092ec6ba030"} Nov 24 12:30:44 crc kubenswrapper[4782]: I1124 12:30:44.125610 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" event={"ID":"891636a5-0fde-4436-b3ab-7831d7420439","Type":"ContainerStarted","Data":"78d18f80b5a02e478b1f8508a6d520e004d4800f56b22021d3c95f25a18fce5c"} Nov 24 12:30:44 crc kubenswrapper[4782]: I1124 12:30:44.144898 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" podStartSLOduration=1.728037402 podStartE2EDuration="2.144877512s" podCreationTimestamp="2025-11-24 12:30:42 +0000 UTC" firstStartedPulling="2025-11-24 12:30:43.08760515 +0000 UTC m=+2092.331438919" lastFinishedPulling="2025-11-24 12:30:43.50444526 +0000 UTC m=+2092.748279029" observedRunningTime="2025-11-24 12:30:44.140226289 +0000 UTC m=+2093.384060058" watchObservedRunningTime="2025-11-24 12:30:44.144877512 +0000 UTC m=+2093.388711291" Nov 24 12:31:00 crc kubenswrapper[4782]: I1124 12:31:00.410980 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:31:00 crc kubenswrapper[4782]: I1124 12:31:00.411551 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:31:04 crc kubenswrapper[4782]: I1124 12:31:04.425285 4782 scope.go:117] "RemoveContainer" containerID="6ac812412bd57e4406e3abf47e9f007139693eb34fd65d45ea419080e07d74c6" Nov 24 12:31:30 crc kubenswrapper[4782]: I1124 12:31:30.410493 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:31:30 crc kubenswrapper[4782]: I1124 12:31:30.411023 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:31:38 crc kubenswrapper[4782]: I1124 12:31:38.585271 4782 generic.go:334] "Generic (PLEG): container finished" podID="891636a5-0fde-4436-b3ab-7831d7420439" 
containerID="78d18f80b5a02e478b1f8508a6d520e004d4800f56b22021d3c95f25a18fce5c" exitCode=0 Nov 24 12:31:38 crc kubenswrapper[4782]: I1124 12:31:38.585386 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" event={"ID":"891636a5-0fde-4436-b3ab-7831d7420439","Type":"ContainerDied","Data":"78d18f80b5a02e478b1f8508a6d520e004d4800f56b22021d3c95f25a18fce5c"} Nov 24 12:31:39 crc kubenswrapper[4782]: I1124 12:31:39.998646 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.056021 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cvkm\" (UniqueName: \"kubernetes.io/projected/891636a5-0fde-4436-b3ab-7831d7420439-kube-api-access-5cvkm\") pod \"891636a5-0fde-4436-b3ab-7831d7420439\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.056071 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-ovn-metadata-agent-neutron-config-0\") pod \"891636a5-0fde-4436-b3ab-7831d7420439\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.056111 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-ssh-key\") pod \"891636a5-0fde-4436-b3ab-7831d7420439\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.056158 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-inventory\") pod \"891636a5-0fde-4436-b3ab-7831d7420439\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.056185 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-nova-metadata-neutron-config-0\") pod \"891636a5-0fde-4436-b3ab-7831d7420439\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.056215 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-metadata-combined-ca-bundle\") pod \"891636a5-0fde-4436-b3ab-7831d7420439\" (UID: \"891636a5-0fde-4436-b3ab-7831d7420439\") " Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.062683 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891636a5-0fde-4436-b3ab-7831d7420439-kube-api-access-5cvkm" (OuterVolumeSpecName: "kube-api-access-5cvkm") pod "891636a5-0fde-4436-b3ab-7831d7420439" (UID: "891636a5-0fde-4436-b3ab-7831d7420439"). InnerVolumeSpecName "kube-api-access-5cvkm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.075631 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "891636a5-0fde-4436-b3ab-7831d7420439" (UID: "891636a5-0fde-4436-b3ab-7831d7420439"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.086574 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "891636a5-0fde-4436-b3ab-7831d7420439" (UID: "891636a5-0fde-4436-b3ab-7831d7420439"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.091856 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "891636a5-0fde-4436-b3ab-7831d7420439" (UID: "891636a5-0fde-4436-b3ab-7831d7420439"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.095303 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "891636a5-0fde-4436-b3ab-7831d7420439" (UID: "891636a5-0fde-4436-b3ab-7831d7420439"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.105725 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-inventory" (OuterVolumeSpecName: "inventory") pod "891636a5-0fde-4436-b3ab-7831d7420439" (UID: "891636a5-0fde-4436-b3ab-7831d7420439"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.158636 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cvkm\" (UniqueName: \"kubernetes.io/projected/891636a5-0fde-4436-b3ab-7831d7420439-kube-api-access-5cvkm\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.158881 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.158945 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.159017 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.159073 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.159129 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891636a5-0fde-4436-b3ab-7831d7420439-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.604299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" event={"ID":"891636a5-0fde-4436-b3ab-7831d7420439","Type":"ContainerDied","Data":"f9eb67cab7596183ef69ee2999095153735f3123df14f7c20eca0092ec6ba030"} Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.604359 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9eb67cab7596183ef69ee2999095153735f3123df14f7c20eca0092ec6ba030" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.604401 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.712402 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk"] Nov 24 12:31:40 crc kubenswrapper[4782]: E1124 12:31:40.712795 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891636a5-0fde-4436-b3ab-7831d7420439" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.712809 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="891636a5-0fde-4436-b3ab-7831d7420439" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.713027 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="891636a5-0fde-4436-b3ab-7831d7420439" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.713694 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.716511 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.718625 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.721886 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.722061 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.722118 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.729829 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk"] Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.769145 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.769497 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.769523 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.769553 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv95z\" (UniqueName: \"kubernetes.io/projected/1af97733-205a-4fc3-804c-77517c7053db-kube-api-access-xv95z\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.769608 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.870629 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.870719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.870767 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.870785 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.871004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv95z\" (UniqueName: \"kubernetes.io/projected/1af97733-205a-4fc3-804c-77517c7053db-kube-api-access-xv95z\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.877463 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.878208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.878299 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.878480 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:40 crc kubenswrapper[4782]: I1124 12:31:40.899237 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv95z\" (UniqueName: \"kubernetes.io/projected/1af97733-205a-4fc3-804c-77517c7053db-kube-api-access-xv95z\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:41 crc kubenswrapper[4782]: I1124 12:31:41.070560 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:31:41 crc kubenswrapper[4782]: I1124 12:31:41.581086 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk"] Nov 24 12:31:41 crc kubenswrapper[4782]: W1124 12:31:41.587916 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1af97733_205a_4fc3_804c_77517c7053db.slice/crio-c58bc7e2bf86e7d15753bcded8341fca5a597f721e72aed0f0a0de047d1f919a WatchSource:0}: Error finding container c58bc7e2bf86e7d15753bcded8341fca5a597f721e72aed0f0a0de047d1f919a: Status 404 returned error can't find the container with id c58bc7e2bf86e7d15753bcded8341fca5a597f721e72aed0f0a0de047d1f919a Nov 24 12:31:41 crc kubenswrapper[4782]: I1124 12:31:41.613849 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" event={"ID":"1af97733-205a-4fc3-804c-77517c7053db","Type":"ContainerStarted","Data":"c58bc7e2bf86e7d15753bcded8341fca5a597f721e72aed0f0a0de047d1f919a"} Nov 24 12:31:42 crc kubenswrapper[4782]: I1124 12:31:42.626560 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" event={"ID":"1af97733-205a-4fc3-804c-77517c7053db","Type":"ContainerStarted","Data":"6ef05d1c4f0ca96603d511f13012d7fe9f5fe4fa4572de9945807b93b587d538"} Nov 24 12:31:42 crc kubenswrapper[4782]: I1124 12:31:42.668032 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" podStartSLOduration=2.213869662 podStartE2EDuration="2.668015611s" podCreationTimestamp="2025-11-24 12:31:40 +0000 UTC" firstStartedPulling="2025-11-24 12:31:41.590170474 +0000 UTC m=+2150.834004243" lastFinishedPulling="2025-11-24 12:31:42.044316423 +0000 UTC m=+2151.288150192" observedRunningTime="2025-11-24 12:31:42.657433421 +0000 UTC m=+2151.901267190" watchObservedRunningTime="2025-11-24 12:31:42.668015611 +0000 UTC m=+2151.911849380" Nov 24 12:32:00 crc kubenswrapper[4782]: I1124 12:32:00.410752 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:32:00 crc kubenswrapper[4782]: I1124 12:32:00.411341 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:32:00 crc kubenswrapper[4782]: I1124 12:32:00.411420 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:32:00 crc kubenswrapper[4782]: I1124 12:32:00.412252 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:32:00 crc kubenswrapper[4782]: I1124 12:32:00.412308 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" gracePeriod=600 Nov 24 12:32:00 crc kubenswrapper[4782]: E1124 12:32:00.540150 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:32:01 crc kubenswrapper[4782]: I1124 12:32:01.026613 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" exitCode=0 Nov 24 12:32:01 crc kubenswrapper[4782]: I1124 12:32:01.026925 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059"} Nov 24 12:32:01 crc kubenswrapper[4782]: I1124 12:32:01.026958 4782 scope.go:117] "RemoveContainer" containerID="e5f5e190d98771d99805cc8ad5110104a3ace9bfc8a7a349a68c23899adc8da6" Nov 24 12:32:01 crc kubenswrapper[4782]: I1124 12:32:01.028227 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:32:01 crc kubenswrapper[4782]: E1124 12:32:01.028523 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:32:11 crc kubenswrapper[4782]: I1124 12:32:11.499570 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:32:11 crc kubenswrapper[4782]: E1124 12:32:11.500506 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:32:25 crc kubenswrapper[4782]: I1124 12:32:25.492322 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:32:25 crc kubenswrapper[4782]: E1124 12:32:25.493271 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:32:37 crc kubenswrapper[4782]: I1124 12:32:37.491655 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:32:37 crc kubenswrapper[4782]: E1124 12:32:37.492420 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:32:50 crc kubenswrapper[4782]: E1124 12:32:50.972115 4782 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.482s" Nov 24 12:32:50 crc kubenswrapper[4782]: I1124 12:32:50.977938 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:32:50 crc kubenswrapper[4782]: E1124 12:32:50.978137 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:33:04 crc kubenswrapper[4782]: I1124 12:33:04.492333 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:33:04 crc kubenswrapper[4782]: E1124 12:33:04.493153 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:33:18 crc kubenswrapper[4782]: I1124 12:33:18.491606 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:33:18 crc kubenswrapper[4782]: E1124 12:33:18.492248 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:33:33 crc kubenswrapper[4782]: I1124 12:33:33.491601 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:33:33 crc kubenswrapper[4782]: E1124 12:33:33.492272 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:33:47 crc kubenswrapper[4782]: I1124 12:33:47.491349 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:33:47 crc kubenswrapper[4782]: E1124 12:33:47.492174 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:34:02 crc kubenswrapper[4782]: I1124 12:34:02.491213 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:34:02 crc kubenswrapper[4782]: E1124 12:34:02.491898 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:34:16 crc kubenswrapper[4782]: I1124 12:34:16.491611 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:34:16 crc kubenswrapper[4782]: E1124 12:34:16.492580 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:34:27 crc kubenswrapper[4782]: I1124 12:34:27.492165 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:34:27 crc kubenswrapper[4782]: E1124 12:34:27.493289 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" 
podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:34:39 crc kubenswrapper[4782]: I1124 12:34:39.491865 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:34:39 crc kubenswrapper[4782]: E1124 12:34:39.492915 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:34:53 crc kubenswrapper[4782]: I1124 12:34:53.492070 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:34:53 crc kubenswrapper[4782]: E1124 12:34:53.494014 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:35:05 crc kubenswrapper[4782]: I1124 12:35:05.490940 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:35:05 crc kubenswrapper[4782]: E1124 12:35:05.491961 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:35:17 crc kubenswrapper[4782]: I1124 12:35:17.491178 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:35:17 crc kubenswrapper[4782]: E1124 12:35:17.492755 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:35:31 crc kubenswrapper[4782]: I1124 12:35:31.497726 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:35:31 crc kubenswrapper[4782]: E1124 12:35:31.498873 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:35:46 crc kubenswrapper[4782]: I1124 12:35:46.491185 4782 scope.go:117] "RemoveContainer" 
containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:35:46 crc kubenswrapper[4782]: E1124 12:35:46.491964 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:36:01 crc kubenswrapper[4782]: I1124 12:36:01.499867 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:36:01 crc kubenswrapper[4782]: E1124 12:36:01.500741 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:36:12 crc kubenswrapper[4782]: I1124 12:36:12.491153 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:36:12 crc kubenswrapper[4782]: E1124 12:36:12.491990 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:36:24 crc kubenswrapper[4782]: I1124 12:36:24.491083 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:36:24 crc kubenswrapper[4782]: E1124 12:36:24.491850 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:36:30 crc kubenswrapper[4782]: I1124 12:36:30.965532 4782 generic.go:334] "Generic (PLEG): container finished" podID="1af97733-205a-4fc3-804c-77517c7053db" containerID="6ef05d1c4f0ca96603d511f13012d7fe9f5fe4fa4572de9945807b93b587d538" exitCode=0 Nov 24 12:36:30 crc kubenswrapper[4782]: I1124 12:36:30.965612 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" event={"ID":"1af97733-205a-4fc3-804c-77517c7053db","Type":"ContainerDied","Data":"6ef05d1c4f0ca96603d511f13012d7fe9f5fe4fa4572de9945807b93b587d538"} Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.397079 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.504466 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-secret-0\") pod \"1af97733-205a-4fc3-804c-77517c7053db\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.504860 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-inventory\") pod \"1af97733-205a-4fc3-804c-77517c7053db\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.504887 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv95z\" (UniqueName: \"kubernetes.io/projected/1af97733-205a-4fc3-804c-77517c7053db-kube-api-access-xv95z\") pod \"1af97733-205a-4fc3-804c-77517c7053db\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.505018 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-combined-ca-bundle\") pod \"1af97733-205a-4fc3-804c-77517c7053db\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.505088 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-ssh-key\") pod \"1af97733-205a-4fc3-804c-77517c7053db\" (UID: \"1af97733-205a-4fc3-804c-77517c7053db\") " Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.518339 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af97733-205a-4fc3-804c-77517c7053db-kube-api-access-xv95z" (OuterVolumeSpecName: "kube-api-access-xv95z") pod "1af97733-205a-4fc3-804c-77517c7053db" (UID: "1af97733-205a-4fc3-804c-77517c7053db"). InnerVolumeSpecName "kube-api-access-xv95z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.521613 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1af97733-205a-4fc3-804c-77517c7053db" (UID: "1af97733-205a-4fc3-804c-77517c7053db"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.531358 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-inventory" (OuterVolumeSpecName: "inventory") pod "1af97733-205a-4fc3-804c-77517c7053db" (UID: "1af97733-205a-4fc3-804c-77517c7053db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.532529 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1af97733-205a-4fc3-804c-77517c7053db" (UID: "1af97733-205a-4fc3-804c-77517c7053db"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.544093 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1af97733-205a-4fc3-804c-77517c7053db" (UID: "1af97733-205a-4fc3-804c-77517c7053db"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.607092 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.607125 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.607136 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.607145 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv95z\" (UniqueName: \"kubernetes.io/projected/1af97733-205a-4fc3-804c-77517c7053db-kube-api-access-xv95z\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.607153 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af97733-205a-4fc3-804c-77517c7053db-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.991971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" event={"ID":"1af97733-205a-4fc3-804c-77517c7053db","Type":"ContainerDied","Data":"c58bc7e2bf86e7d15753bcded8341fca5a597f721e72aed0f0a0de047d1f919a"} Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.992010 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c58bc7e2bf86e7d15753bcded8341fca5a597f721e72aed0f0a0de047d1f919a" Nov 24 12:36:32 crc kubenswrapper[4782]: I1124 12:36:32.992041 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.145148 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47"] Nov 24 12:36:33 crc kubenswrapper[4782]: E1124 12:36:33.145666 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af97733-205a-4fc3-804c-77517c7053db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.145689 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af97733-205a-4fc3-804c-77517c7053db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.145931 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af97733-205a-4fc3-804c-77517c7053db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.146757 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.149410 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.149607 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.149720 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.149887 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.149993 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.150088 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.150294 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.155204 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47"] Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347178 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347240 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347303 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkm2z\" (UniqueName: \"kubernetes.io/projected/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-kube-api-access-kkm2z\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347327 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347535 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347565 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.347624 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449226 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 
24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449280 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449346 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449425 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449465 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449487 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449512 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkm2z\" (UniqueName: \"kubernetes.io/projected/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-kube-api-access-kkm2z\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.449531 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.450416 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.453932 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.453976 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.453992 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.455137 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.455171 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.455621 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.459185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.472206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkm2z\" (UniqueName: 
\"kubernetes.io/projected/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-kube-api-access-kkm2z\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5vt47\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.477156 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:36:33 crc kubenswrapper[4782]: I1124 12:36:33.994120 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47"] Nov 24 12:36:34 crc kubenswrapper[4782]: I1124 12:36:34.005161 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:36:35 crc kubenswrapper[4782]: I1124 12:36:35.018209 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" event={"ID":"6cd5f290-1276-4bbf-a7c0-9075e776dd0b","Type":"ContainerStarted","Data":"705aa582a9ee3d5b68f6eed0de8457c0737436c0d5af69dd16c67d1757dd3df5"} Nov 24 12:36:35 crc kubenswrapper[4782]: I1124 12:36:35.018544 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" event={"ID":"6cd5f290-1276-4bbf-a7c0-9075e776dd0b","Type":"ContainerStarted","Data":"6f418689350d918665913e26eb03f286113b30efcd31f2879715ab55e87c823a"} Nov 24 12:36:35 crc kubenswrapper[4782]: I1124 12:36:35.053655 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" podStartSLOduration=1.568294799 podStartE2EDuration="2.053633717s" podCreationTimestamp="2025-11-24 12:36:33 +0000 UTC" firstStartedPulling="2025-11-24 12:36:34.004976416 +0000 UTC m=+2443.248810185" lastFinishedPulling="2025-11-24 12:36:34.490315324 +0000 UTC m=+2443.734149103" observedRunningTime="2025-11-24 12:36:35.044859499 +0000 UTC m=+2444.288693268" watchObservedRunningTime="2025-11-24 12:36:35.053633717 +0000 UTC m=+2444.297467486" Nov 24 12:36:36 crc kubenswrapper[4782]: I1124 12:36:36.490867 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:36:36 crc kubenswrapper[4782]: E1124 12:36:36.491497 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:36:48 crc kubenswrapper[4782]: I1124 12:36:48.491336 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:36:48 crc kubenswrapper[4782]: E1124 12:36:48.491968 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:37:00 crc kubenswrapper[4782]: I1124 12:37:00.491462 4782 scope.go:117] 
"RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059" Nov 24 12:37:01 crc kubenswrapper[4782]: I1124 12:37:01.245518 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"5760a807180ac00d6e69fd65ab9ce04e6c93fd610534170bcdb83cb95567b0bd"} Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.585972 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g9cgv"] Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.594641 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.627323 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9cgv"] Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.675498 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz855\" (UniqueName: \"kubernetes.io/projected/1d485a0a-78bf-4c19-b130-522f091bb5b4-kube-api-access-dz855\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.675621 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-utilities\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.675761 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-catalog-content\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.777498 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz855\" (UniqueName: \"kubernetes.io/projected/1d485a0a-78bf-4c19-b130-522f091bb5b4-kube-api-access-dz855\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.777610 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-utilities\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.777718 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-catalog-content\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.778193 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-utilities\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.778208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-catalog-content\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.805220 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz855\" (UniqueName: \"kubernetes.io/projected/1d485a0a-78bf-4c19-b130-522f091bb5b4-kube-api-access-dz855\") pod \"redhat-marketplace-g9cgv\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:31 crc kubenswrapper[4782]: I1124 12:38:31.934819 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:32 crc kubenswrapper[4782]: I1124 12:38:32.581183 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9cgv"] Nov 24 12:38:33 crc kubenswrapper[4782]: I1124 12:38:33.009163 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerID="98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f" exitCode=0 Nov 24 12:38:33 crc kubenswrapper[4782]: I1124 12:38:33.009432 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9cgv" event={"ID":"1d485a0a-78bf-4c19-b130-522f091bb5b4","Type":"ContainerDied","Data":"98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f"} Nov 24 12:38:33 crc kubenswrapper[4782]: I1124 12:38:33.009497 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9cgv" event={"ID":"1d485a0a-78bf-4c19-b130-522f091bb5b4","Type":"ContainerStarted","Data":"89d9214dc6bb00e73ee004a93716a509a6794704f3cd9c4b53e615cafd6b9886"} Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.175050 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fpnzj"] Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.177487 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.195899 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fpnzj"] Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.247145 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5bm\" (UniqueName: \"kubernetes.io/projected/71bc86b5-3f68-40c5-8495-7fbfc4a40407-kube-api-access-ps5bm\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.247792 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-catalog-content\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.247935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-utilities\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.349701 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-utilities\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.349821 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5bm\" (UniqueName: \"kubernetes.io/projected/71bc86b5-3f68-40c5-8495-7fbfc4a40407-kube-api-access-ps5bm\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.349901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-catalog-content\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.350242 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-utilities\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.350341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-catalog-content\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.375123 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ps5bm\" (UniqueName: \"kubernetes.io/projected/71bc86b5-3f68-40c5-8495-7fbfc4a40407-kube-api-access-ps5bm\") pod \"certified-operators-fpnzj\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:34 crc kubenswrapper[4782]: I1124 12:38:34.538279 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:35 crc kubenswrapper[4782]: I1124 12:38:35.031142 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9cgv" event={"ID":"1d485a0a-78bf-4c19-b130-522f091bb5b4","Type":"ContainerStarted","Data":"9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200"} Nov 24 12:38:35 crc kubenswrapper[4782]: W1124 12:38:35.110532 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bc86b5_3f68_40c5_8495_7fbfc4a40407.slice/crio-5e7c48c3e60738184b03107ed2d13ef02b53cd1e7df77555d012cda3e6f274ba WatchSource:0}: Error finding container 5e7c48c3e60738184b03107ed2d13ef02b53cd1e7df77555d012cda3e6f274ba: Status 404 returned error can't find the container with id 5e7c48c3e60738184b03107ed2d13ef02b53cd1e7df77555d012cda3e6f274ba Nov 24 12:38:35 crc kubenswrapper[4782]: I1124 12:38:35.113949 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fpnzj"] Nov 24 12:38:36 crc kubenswrapper[4782]: I1124 12:38:36.043236 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerID="9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200" exitCode=0 Nov 24 12:38:36 crc kubenswrapper[4782]: I1124 12:38:36.043801 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9cgv" event={"ID":"1d485a0a-78bf-4c19-b130-522f091bb5b4","Type":"ContainerDied","Data":"9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200"} Nov 24 12:38:36 crc kubenswrapper[4782]: I1124 12:38:36.047640 4782 generic.go:334] "Generic (PLEG): container finished" podID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerID="17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957" exitCode=0 Nov 24 12:38:36 crc kubenswrapper[4782]: I1124 12:38:36.047678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fpnzj" event={"ID":"71bc86b5-3f68-40c5-8495-7fbfc4a40407","Type":"ContainerDied","Data":"17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957"} Nov 24 12:38:36 crc kubenswrapper[4782]: I1124 12:38:36.047704 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fpnzj" event={"ID":"71bc86b5-3f68-40c5-8495-7fbfc4a40407","Type":"ContainerStarted","Data":"5e7c48c3e60738184b03107ed2d13ef02b53cd1e7df77555d012cda3e6f274ba"} Nov 24 12:38:37 crc kubenswrapper[4782]: I1124 12:38:37.056928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fpnzj" event={"ID":"71bc86b5-3f68-40c5-8495-7fbfc4a40407","Type":"ContainerStarted","Data":"be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4"} Nov 24 12:38:37 crc kubenswrapper[4782]: I1124 12:38:37.059365 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9cgv" 
event={"ID":"1d485a0a-78bf-4c19-b130-522f091bb5b4","Type":"ContainerStarted","Data":"2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2"} Nov 24 12:38:37 crc kubenswrapper[4782]: I1124 12:38:37.099663 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g9cgv" podStartSLOduration=2.67211041 podStartE2EDuration="6.099645099s" podCreationTimestamp="2025-11-24 12:38:31 +0000 UTC" firstStartedPulling="2025-11-24 12:38:33.013231565 +0000 UTC m=+2562.257065334" lastFinishedPulling="2025-11-24 12:38:36.440766254 +0000 UTC m=+2565.684600023" observedRunningTime="2025-11-24 12:38:37.095071185 +0000 UTC m=+2566.338904954" watchObservedRunningTime="2025-11-24 12:38:37.099645099 +0000 UTC m=+2566.343478868" Nov 24 12:38:39 crc kubenswrapper[4782]: I1124 12:38:39.077739 4782 generic.go:334] "Generic (PLEG): container finished" podID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerID="be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4" exitCode=0 Nov 24 12:38:39 crc kubenswrapper[4782]: I1124 12:38:39.077809 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fpnzj" event={"ID":"71bc86b5-3f68-40c5-8495-7fbfc4a40407","Type":"ContainerDied","Data":"be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4"} Nov 24 12:38:40 crc kubenswrapper[4782]: I1124 12:38:40.088826 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fpnzj" event={"ID":"71bc86b5-3f68-40c5-8495-7fbfc4a40407","Type":"ContainerStarted","Data":"38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7"} Nov 24 12:38:40 crc kubenswrapper[4782]: I1124 12:38:40.112308 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fpnzj" podStartSLOduration=2.616738475 podStartE2EDuration="6.112291779s" podCreationTimestamp="2025-11-24 12:38:34 +0000 UTC" firstStartedPulling="2025-11-24 12:38:36.049253643 +0000 UTC m=+2565.293087422" lastFinishedPulling="2025-11-24 12:38:39.544806957 +0000 UTC m=+2568.788640726" observedRunningTime="2025-11-24 12:38:40.11084525 +0000 UTC m=+2569.354679019" watchObservedRunningTime="2025-11-24 12:38:40.112291779 +0000 UTC m=+2569.356125558" Nov 24 12:38:41 crc kubenswrapper[4782]: I1124 12:38:41.935306 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:41 crc kubenswrapper[4782]: I1124 12:38:41.935655 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:42 crc kubenswrapper[4782]: I1124 12:38:42.017977 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:42 crc kubenswrapper[4782]: I1124 12:38:42.149912 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:43 crc kubenswrapper[4782]: I1124 12:38:43.161406 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9cgv"] Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.121707 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g9cgv" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="registry-server" 
containerID="cri-o://2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2" gracePeriod=2 Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.538434 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.541233 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.586726 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.595599 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.659299 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-utilities\") pod \"1d485a0a-78bf-4c19-b130-522f091bb5b4\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.659442 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-catalog-content\") pod \"1d485a0a-78bf-4c19-b130-522f091bb5b4\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.659537 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz855\" (UniqueName: \"kubernetes.io/projected/1d485a0a-78bf-4c19-b130-522f091bb5b4-kube-api-access-dz855\") pod \"1d485a0a-78bf-4c19-b130-522f091bb5b4\" (UID: \"1d485a0a-78bf-4c19-b130-522f091bb5b4\") " Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.660830 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-utilities" (OuterVolumeSpecName: "utilities") pod "1d485a0a-78bf-4c19-b130-522f091bb5b4" (UID: "1d485a0a-78bf-4c19-b130-522f091bb5b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.672356 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d485a0a-78bf-4c19-b130-522f091bb5b4-kube-api-access-dz855" (OuterVolumeSpecName: "kube-api-access-dz855") pod "1d485a0a-78bf-4c19-b130-522f091bb5b4" (UID: "1d485a0a-78bf-4c19-b130-522f091bb5b4"). InnerVolumeSpecName "kube-api-access-dz855". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.678209 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d485a0a-78bf-4c19-b130-522f091bb5b4" (UID: "1d485a0a-78bf-4c19-b130-522f091bb5b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.761797 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.761882 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d485a0a-78bf-4c19-b130-522f091bb5b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:44 crc kubenswrapper[4782]: I1124 12:38:44.761899 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz855\" (UniqueName: \"kubernetes.io/projected/1d485a0a-78bf-4c19-b130-522f091bb5b4-kube-api-access-dz855\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.132803 4782 generic.go:334] "Generic (PLEG): container finished" podID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerID="2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2" exitCode=0 Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.132906 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9cgv" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.132911 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9cgv" event={"ID":"1d485a0a-78bf-4c19-b130-522f091bb5b4","Type":"ContainerDied","Data":"2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2"} Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.132949 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9cgv" event={"ID":"1d485a0a-78bf-4c19-b130-522f091bb5b4","Type":"ContainerDied","Data":"89d9214dc6bb00e73ee004a93716a509a6794704f3cd9c4b53e615cafd6b9886"} Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.132968 4782 scope.go:117] "RemoveContainer" containerID="2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.166560 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9cgv"] Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.173556 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9cgv"] Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.182300 4782 scope.go:117] "RemoveContainer" containerID="9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.187065 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.227946 4782 scope.go:117] "RemoveContainer" containerID="98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.265136 4782 scope.go:117] "RemoveContainer" containerID="2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2" Nov 24 12:38:45 crc kubenswrapper[4782]: E1124 12:38:45.265948 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2\": container with ID starting with 
2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2 not found: ID does not exist" containerID="2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.265988 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2"} err="failed to get container status \"2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2\": rpc error: code = NotFound desc = could not find container \"2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2\": container with ID starting with 2c6fac3874642af5a98da22f01679b04669094b346c63cc0c9acf1e3b3f234e2 not found: ID does not exist" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.266018 4782 scope.go:117] "RemoveContainer" containerID="9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200" Nov 24 12:38:45 crc kubenswrapper[4782]: E1124 12:38:45.266463 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200\": container with ID starting with 9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200 not found: ID does not exist" containerID="9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.266493 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200"} err="failed to get container status \"9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200\": rpc error: code = NotFound desc = could not find container \"9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200\": container with ID starting with 9f47c049abe5842485eb8af06af21ddca8168321b79d2d0a0a4342ab425c4200 not found: ID does not exist" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.266509 4782 scope.go:117] "RemoveContainer" containerID="98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f" Nov 24 12:38:45 crc kubenswrapper[4782]: E1124 12:38:45.266775 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f\": container with ID starting with 98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f not found: ID does not exist" containerID="98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.266806 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f"} err="failed to get container status \"98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f\": rpc error: code = NotFound desc = could not find container \"98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f\": container with ID starting with 98728d4c9bec83ff000d52004c6dbbbd6412e11158d3092f5a4e65cdfb55950f not found: ID does not exist" Nov 24 12:38:45 crc kubenswrapper[4782]: I1124 12:38:45.508109 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" path="/var/lib/kubelet/pods/1d485a0a-78bf-4c19-b130-522f091bb5b4/volumes" Nov 24 12:38:46 crc kubenswrapper[4782]: I1124 12:38:46.964509 
4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fpnzj"] Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.159494 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fpnzj" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerName="registry-server" containerID="cri-o://38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7" gracePeriod=2 Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.592528 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.643195 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-catalog-content\") pod \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.643289 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5bm\" (UniqueName: \"kubernetes.io/projected/71bc86b5-3f68-40c5-8495-7fbfc4a40407-kube-api-access-ps5bm\") pod \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.643324 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-utilities\") pod \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\" (UID: \"71bc86b5-3f68-40c5-8495-7fbfc4a40407\") " Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.644788 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-utilities" (OuterVolumeSpecName: "utilities") pod "71bc86b5-3f68-40c5-8495-7fbfc4a40407" (UID: "71bc86b5-3f68-40c5-8495-7fbfc4a40407"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.660624 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71bc86b5-3f68-40c5-8495-7fbfc4a40407-kube-api-access-ps5bm" (OuterVolumeSpecName: "kube-api-access-ps5bm") pod "71bc86b5-3f68-40c5-8495-7fbfc4a40407" (UID: "71bc86b5-3f68-40c5-8495-7fbfc4a40407"). InnerVolumeSpecName "kube-api-access-ps5bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.692251 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71bc86b5-3f68-40c5-8495-7fbfc4a40407" (UID: "71bc86b5-3f68-40c5-8495-7fbfc4a40407"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.745351 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.745401 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps5bm\" (UniqueName: \"kubernetes.io/projected/71bc86b5-3f68-40c5-8495-7fbfc4a40407-kube-api-access-ps5bm\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:48 crc kubenswrapper[4782]: I1124 12:38:48.745413 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71bc86b5-3f68-40c5-8495-7fbfc4a40407-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.170218 4782 generic.go:334] "Generic (PLEG): container finished" podID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerID="38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7" exitCode=0 Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.170257 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fpnzj" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.170286 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fpnzj" event={"ID":"71bc86b5-3f68-40c5-8495-7fbfc4a40407","Type":"ContainerDied","Data":"38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7"} Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.170324 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fpnzj" event={"ID":"71bc86b5-3f68-40c5-8495-7fbfc4a40407","Type":"ContainerDied","Data":"5e7c48c3e60738184b03107ed2d13ef02b53cd1e7df77555d012cda3e6f274ba"} Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.170350 4782 scope.go:117] "RemoveContainer" containerID="38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.191319 4782 scope.go:117] "RemoveContainer" containerID="be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.220571 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fpnzj"] Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.227735 4782 scope.go:117] "RemoveContainer" containerID="17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.237428 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fpnzj"] Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.268293 4782 scope.go:117] "RemoveContainer" containerID="38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7" Nov 24 12:38:49 crc kubenswrapper[4782]: E1124 12:38:49.268720 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7\": container with ID starting with 38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7 not found: ID does not exist" containerID="38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.268799 
4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7"} err="failed to get container status \"38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7\": rpc error: code = NotFound desc = could not find container \"38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7\": container with ID starting with 38a0e49f424b83f8491bb31598c26079547e95c78f815262674f7f3dbdd007b7 not found: ID does not exist" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.268838 4782 scope.go:117] "RemoveContainer" containerID="be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4" Nov 24 12:38:49 crc kubenswrapper[4782]: E1124 12:38:49.269252 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4\": container with ID starting with be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4 not found: ID does not exist" containerID="be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.269288 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4"} err="failed to get container status \"be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4\": rpc error: code = NotFound desc = could not find container \"be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4\": container with ID starting with be49c82bab1c0fe71304bb11bb7507012cb96d3353956b2bb5d80f762b6ddaa4 not found: ID does not exist" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.269306 4782 scope.go:117] "RemoveContainer" containerID="17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957" Nov 24 12:38:49 crc kubenswrapper[4782]: E1124 12:38:49.269659 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957\": container with ID starting with 17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957 not found: ID does not exist" containerID="17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.269694 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957"} err="failed to get container status \"17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957\": rpc error: code = NotFound desc = could not find container \"17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957\": container with ID starting with 17db1756be3fd01e62b795214408780354f21568c5aecfeb4c24ddaabf60f957 not found: ID does not exist" Nov 24 12:38:49 crc kubenswrapper[4782]: I1124 12:38:49.502683 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" path="/var/lib/kubelet/pods/71bc86b5-3f68-40c5-8495-7fbfc4a40407/volumes" Nov 24 12:39:00 crc kubenswrapper[4782]: I1124 12:39:00.410687 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:39:00 crc kubenswrapper[4782]: I1124 12:39:00.411177 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.410923 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.411674 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.572657 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d428q"] Nov 24 12:39:30 crc kubenswrapper[4782]: E1124 12:39:30.573119 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="extract-utilities" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573138 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="extract-utilities" Nov 24 12:39:30 crc kubenswrapper[4782]: E1124 12:39:30.573164 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="extract-content" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573170 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="extract-content" Nov 24 12:39:30 crc kubenswrapper[4782]: E1124 12:39:30.573177 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="registry-server" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573183 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="registry-server" Nov 24 12:39:30 crc kubenswrapper[4782]: E1124 12:39:30.573195 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerName="extract-utilities" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573203 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerName="extract-utilities" Nov 24 12:39:30 crc kubenswrapper[4782]: E1124 12:39:30.573220 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerName="extract-content" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573228 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerName="extract-content" Nov 24 12:39:30 crc kubenswrapper[4782]: E1124 12:39:30.573248 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" 
containerName="registry-server" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573253 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerName="registry-server" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573487 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d485a0a-78bf-4c19-b130-522f091bb5b4" containerName="registry-server" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.573519 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="71bc86b5-3f68-40c5-8495-7fbfc4a40407" containerName="registry-server" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.575187 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.589561 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d428q"] Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.610639 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-catalog-content\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.610701 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-utilities\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.610784 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmw6\" (UniqueName: \"kubernetes.io/projected/8b6d00bb-518d-41ef-83d5-174fc5e70c74-kube-api-access-tbmw6\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.711987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbmw6\" (UniqueName: \"kubernetes.io/projected/8b6d00bb-518d-41ef-83d5-174fc5e70c74-kube-api-access-tbmw6\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.712136 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-catalog-content\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.712174 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-utilities\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.712780 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-utilities\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.712789 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-catalog-content\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.739126 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbmw6\" (UniqueName: \"kubernetes.io/projected/8b6d00bb-518d-41ef-83d5-174fc5e70c74-kube-api-access-tbmw6\") pod \"redhat-operators-d428q\" (UID: \"8b6d00bb-518d-41ef-83d5-174fc5e70c74\") " pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:30 crc kubenswrapper[4782]: I1124 12:39:30.937636 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:31 crc kubenswrapper[4782]: I1124 12:39:31.417855 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d428q"] Nov 24 12:39:31 crc kubenswrapper[4782]: I1124 12:39:31.524327 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d428q" event={"ID":"8b6d00bb-518d-41ef-83d5-174fc5e70c74","Type":"ContainerStarted","Data":"7e006323ba5023f23fac39e3e683ef2ec6854cb93d39f71ca8221224bbf43b3b"} Nov 24 12:39:32 crc kubenswrapper[4782]: I1124 12:39:32.533864 4782 generic.go:334] "Generic (PLEG): container finished" podID="8b6d00bb-518d-41ef-83d5-174fc5e70c74" containerID="8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25" exitCode=0 Nov 24 12:39:32 crc kubenswrapper[4782]: I1124 12:39:32.533901 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d428q" event={"ID":"8b6d00bb-518d-41ef-83d5-174fc5e70c74","Type":"ContainerDied","Data":"8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25"} Nov 24 12:39:33 crc kubenswrapper[4782]: I1124 12:39:33.549594 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d428q" event={"ID":"8b6d00bb-518d-41ef-83d5-174fc5e70c74","Type":"ContainerStarted","Data":"400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d"} Nov 24 12:39:37 crc kubenswrapper[4782]: I1124 12:39:37.586841 4782 generic.go:334] "Generic (PLEG): container finished" podID="8b6d00bb-518d-41ef-83d5-174fc5e70c74" containerID="400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d" exitCode=0 Nov 24 12:39:37 crc kubenswrapper[4782]: I1124 12:39:37.586908 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d428q" event={"ID":"8b6d00bb-518d-41ef-83d5-174fc5e70c74","Type":"ContainerDied","Data":"400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d"} Nov 24 12:39:38 crc kubenswrapper[4782]: I1124 12:39:38.600434 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d428q" event={"ID":"8b6d00bb-518d-41ef-83d5-174fc5e70c74","Type":"ContainerStarted","Data":"610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5"} Nov 24 12:39:38 crc kubenswrapper[4782]: I1124 12:39:38.627783 4782 
Nov 24 12:39:40 crc kubenswrapper[4782]: I1124 12:39:40.938058 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d428q"
Nov 24 12:39:40 crc kubenswrapper[4782]: I1124 12:39:40.938413 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d428q"
Nov 24 12:39:41 crc kubenswrapper[4782]: I1124 12:39:41.983777 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d428q" podUID="8b6d00bb-518d-41ef-83d5-174fc5e70c74" containerName="registry-server" probeResult="failure" output=<
Nov 24 12:39:41 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Nov 24 12:39:41 crc kubenswrapper[4782]: >
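The startup-probe output above is the signature of a gRPC health check: the registry-server container is running, but its gRPC endpoint on :50051 is not yet accepting connections while the catalog content loads. A standalone sketch of an equivalent check, assuming a plaintext endpoint and the same 1-second budget as the probe:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	// Dial the registry-server's gRPC endpoint, plaintext, blocking until
	// connected or the 1s deadline expires (the probe's failure mode above).
	conn, err := grpc.DialContext(ctx, "127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		fmt.Println("failed to connect within 1s:", err)
		return
	}
	defer conn.Close()
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog has loaded
}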
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.199977 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkm2z\" (UniqueName: \"kubernetes.io/projected/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-kube-api-access-kkm2z\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200082 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-0\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200223 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-1\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200270 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-inventory\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200291 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-combined-ca-bundle\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200317 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-1\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-ssh-key\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200509 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-extra-config-0\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.200584 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-0\") pod \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\" (UID: \"6cd5f290-1276-4bbf-a7c0-9075e776dd0b\") " Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.213858 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.213916 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-kube-api-access-kkm2z" (OuterVolumeSpecName: "kube-api-access-kkm2z") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "kube-api-access-kkm2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.230721 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.243598 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.251970 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.257567 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-inventory" (OuterVolumeSpecName: "inventory") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.261457 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.264196 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.265251 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6cd5f290-1276-4bbf-a7c0-9075e776dd0b" (UID: "6cd5f290-1276-4bbf-a7c0-9075e776dd0b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304068 4782 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304100 4782 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304111 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkm2z\" (UniqueName: \"kubernetes.io/projected/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-kube-api-access-kkm2z\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304120 4782 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304129 4782 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304138 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304146 4782 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304154 4782 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.304162 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cd5f290-1276-4bbf-a7c0-9075e776dd0b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.675488 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5vt47" event={"ID":"6cd5f290-1276-4bbf-a7c0-9075e776dd0b","Type":"ContainerDied","Data":"6f418689350d918665913e26eb03f286113b30efcd31f2879715ab55e87c823a"} Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.675542 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f418689350d918665913e26eb03f286113b30efcd31f2879715ab55e87c823a" Nov 24 12:39:46 crc 
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.842534 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"]
Nov 24 12:39:46 crc kubenswrapper[4782]: E1124 12:39:46.843945 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd5f290-1276-4bbf-a7c0-9075e776dd0b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.843968 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd5f290-1276-4bbf-a7c0-9075e776dd0b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.844923 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd5f290-1276-4bbf-a7c0-9075e776dd0b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.846066 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.853085 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.854222 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.854543 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f8lr6"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.854764 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.861263 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.876100 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"]
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.915725 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.915975 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"
Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.916073 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdpp\" (UniqueName: \"kubernetes.io/projected/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-kube-api-access-vwdpp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"
\"kubernetes.io/projected/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-kube-api-access-vwdpp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.916236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.916348 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.916505 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:46 crc kubenswrapper[4782]: I1124 12:39:46.916699 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.018101 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.018170 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdpp\" (UniqueName: \"kubernetes.io/projected/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-kube-api-access-vwdpp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.018242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.018300 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.018333 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.018364 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.018442 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.023016 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.023915 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.027013 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.027161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 
Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.032397 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"
Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.035974 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdpp\" (UniqueName: \"kubernetes.io/projected/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-kube-api-access-vwdpp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-snkfr\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"
Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.172086 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"
Nov 24 12:39:47 crc kubenswrapper[4782]: I1124 12:39:47.716049 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr"]
Nov 24 12:39:47 crc kubenswrapper[4782]: W1124 12:39:47.717838 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9db5a23_263f_41cc_a1b6_b90df79aa8d2.slice/crio-ad32fe47e8a881434437b8368b03a7683db862472fdd6a3a276ac7f4bbd52e8b WatchSource:0}: Error finding container ad32fe47e8a881434437b8368b03a7683db862472fdd6a3a276ac7f4bbd52e8b: Status 404 returned error can't find the container with id ad32fe47e8a881434437b8368b03a7683db862472fdd6a3a276ac7f4bbd52e8b
Nov 24 12:39:48 crc kubenswrapper[4782]: I1124 12:39:48.698216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" event={"ID":"c9db5a23-263f-41cc-a1b6-b90df79aa8d2","Type":"ContainerStarted","Data":"06ee33c3b71f5197305afff3038283232f3ea7d8eec84f98a7e4c1763e271dba"}
Nov 24 12:39:48 crc kubenswrapper[4782]: I1124 12:39:48.698610 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" event={"ID":"c9db5a23-263f-41cc-a1b6-b90df79aa8d2","Type":"ContainerStarted","Data":"ad32fe47e8a881434437b8368b03a7683db862472fdd6a3a276ac7f4bbd52e8b"}
Nov 24 12:39:48 crc kubenswrapper[4782]: I1124 12:39:48.714997 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" podStartSLOduration=2.293844033 podStartE2EDuration="2.714977912s" podCreationTimestamp="2025-11-24 12:39:46 +0000 UTC" firstStartedPulling="2025-11-24 12:39:47.72010878 +0000 UTC m=+2636.963942549" lastFinishedPulling="2025-11-24 12:39:48.141242659 +0000 UTC m=+2637.385076428" observedRunningTime="2025-11-24 12:39:48.713754039 +0000 UTC m=+2637.957587808" watchObservedRunningTime="2025-11-24 12:39:48.714977912 +0000 UTC m=+2637.958811681"
Nov 24 12:39:51 crc kubenswrapper[4782]: I1124 12:39:51.109884 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d428q"
Nov 24 12:39:51 crc kubenswrapper[4782]: I1124 12:39:51.165240 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d428q"
Nov 24 12:39:51 crc kubenswrapper[4782]: I1124 12:39:51.347682 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d428q"]
Nov 24 12:39:52 crc kubenswrapper[4782]: I1124 12:39:52.733703 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d428q" podUID="8b6d00bb-518d-41ef-83d5-174fc5e70c74" containerName="registry-server" containerID="cri-o://610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5" gracePeriod=2
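The sequence above is the normal end of life for a marketplace catalog pod: it goes ready at 12:39:51, a DELETE arrives from the API moments later (catalog snapshot pods appear to be routinely replaced by their operator), and the kubelet stops registry-server with a 2-second grace period. A client-go sketch of an equivalent delete call (the kubeconfig path is a placeholder; the grace value mirrors gracePeriod=2 in the kill record):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	grace := int64(2) // matches gracePeriod=2 in the kill record above
	err = cs.CoreV1().Pods("openshift-marketplace").Delete(context.Background(),
		"redhat-operators-d428q", metav1.DeleteOptions{GracePeriodSeconds: &grace})
	fmt.Println("delete err:", err)
}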
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.616381 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.616495 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbmw6\" (UniqueName: \"kubernetes.io/projected/8b6d00bb-518d-41ef-83d5-174fc5e70c74-kube-api-access-tbmw6\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.616507 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6d00bb-518d-41ef-83d5-174fc5e70c74-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.746801 4782 generic.go:334] "Generic (PLEG): container finished" podID="8b6d00bb-518d-41ef-83d5-174fc5e70c74" containerID="610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5" exitCode=0 Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.746842 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d428q" event={"ID":"8b6d00bb-518d-41ef-83d5-174fc5e70c74","Type":"ContainerDied","Data":"610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5"} Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.746884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d428q" event={"ID":"8b6d00bb-518d-41ef-83d5-174fc5e70c74","Type":"ContainerDied","Data":"7e006323ba5023f23fac39e3e683ef2ec6854cb93d39f71ca8221224bbf43b3b"} Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.746909 4782 scope.go:117] "RemoveContainer" containerID="610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.746935 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d428q" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.769294 4782 scope.go:117] "RemoveContainer" containerID="400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.785961 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d428q"] Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.796149 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d428q"] Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.804819 4782 scope.go:117] "RemoveContainer" containerID="8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.862015 4782 scope.go:117] "RemoveContainer" containerID="610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5" Nov 24 12:39:53 crc kubenswrapper[4782]: E1124 12:39:53.862490 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5\": container with ID starting with 610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5 not found: ID does not exist" containerID="610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.862559 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5"} err="failed to get container status \"610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5\": rpc error: code = NotFound desc = could not find container \"610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5\": container with ID starting with 610e6bc18ec9042bbd3941b56d9cdcfd031249cd49638617daf8fd114a2727c5 not found: ID does not exist" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.862611 4782 scope.go:117] "RemoveContainer" containerID="400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d" Nov 24 12:39:53 crc kubenswrapper[4782]: E1124 12:39:53.863062 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d\": container with ID starting with 400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d not found: ID does not exist" containerID="400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.863098 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d"} err="failed to get container status \"400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d\": rpc error: code = NotFound desc = could not find container \"400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d\": container with ID starting with 400b578b4cc548ba52d091543e884b5e665528e6606b5cbadc67d664a7bca59d not found: ID does not exist" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.863119 4782 scope.go:117] "RemoveContainer" containerID="8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25" Nov 24 12:39:53 crc kubenswrapper[4782]: E1124 12:39:53.863530 4782 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25\": container with ID starting with 8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25 not found: ID does not exist" containerID="8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25" Nov 24 12:39:53 crc kubenswrapper[4782]: I1124 12:39:53.863561 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25"} err="failed to get container status \"8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25\": rpc error: code = NotFound desc = could not find container \"8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25\": container with ID starting with 8679929d76fb3c278554b38e76cce16bfb2fffaca8e64fe5472924ae1ac95a25 not found: ID does not exist" Nov 24 12:39:55 crc kubenswrapper[4782]: I1124 12:39:55.501710 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6d00bb-518d-41ef-83d5-174fc5e70c74" path="/var/lib/kubelet/pods/8b6d00bb-518d-41ef-83d5-174fc5e70c74/volumes" Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.410879 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.411501 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.411553 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.412263 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5760a807180ac00d6e69fd65ab9ce04e6c93fd610534170bcdb83cb95567b0bd"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.412315 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://5760a807180ac00d6e69fd65ab9ce04e6c93fd610534170bcdb83cb95567b0bd" gracePeriod=600 Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.821195 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="5760a807180ac00d6e69fd65ab9ce04e6c93fd610534170bcdb83cb95567b0bd" exitCode=0 Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.821279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"5760a807180ac00d6e69fd65ab9ce04e6c93fd610534170bcdb83cb95567b0bd"} 
Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.821581 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"}
Nov 24 12:40:00 crc kubenswrapper[4782]: I1124 12:40:00.821603 4782 scope.go:117] "RemoveContainer" containerID="51ec41bd14602ae0d8e81e207d025934f4f7af5c1aa5a59a021f1503b8c35059"
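That closes the loop on the 8798 liveness failures seen since 12:39: consecutive failures at 30-second intervals crossed the probe's failure threshold, the kubelet reported "will be restarted", killed machine-config-daemon with its 600-second grace period, and the events above show the replacement container (9cc5391f...) starting while an older dead one (51ec41bd...) is pruned. A client-go sketch for confirming the restart from the API side (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("openshift-machine-config-operator").Get(
		context.Background(), "machine-config-daemon-xg6cl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		// RestartCount increments each time the kubelet replaces a container.
		fmt.Printf("%s restarts=%d\n", st.Name, st.RestartCount)
	}
}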
Need to start a new one" pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.135940 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p26xw"] Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.236259 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-utilities\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.236303 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-catalog-content\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.236431 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztw87\" (UniqueName: \"kubernetes.io/projected/a4c13db8-c27c-4f46-86f5-d49a298ade5d-kube-api-access-ztw87\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.338659 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztw87\" (UniqueName: \"kubernetes.io/projected/a4c13db8-c27c-4f46-86f5-d49a298ade5d-kube-api-access-ztw87\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.338801 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-utilities\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.338836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-catalog-content\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.339471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-catalog-content\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.340065 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-utilities\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.365881 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ztw87\" (UniqueName: \"kubernetes.io/projected/a4c13db8-c27c-4f46-86f5-d49a298ade5d-kube-api-access-ztw87\") pod \"community-operators-p26xw\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:54 crc kubenswrapper[4782]: I1124 12:40:54.480458 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:40:55 crc kubenswrapper[4782]: I1124 12:40:55.050229 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p26xw"] Nov 24 12:40:55 crc kubenswrapper[4782]: I1124 12:40:55.292212 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerStarted","Data":"fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f"} Nov 24 12:40:55 crc kubenswrapper[4782]: I1124 12:40:55.292300 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerStarted","Data":"e85fbddc36a1ba829aac7d69a509fbb3622c2182a86d4494af6d627187910325"} Nov 24 12:40:56 crc kubenswrapper[4782]: I1124 12:40:56.303513 4782 generic.go:334] "Generic (PLEG): container finished" podID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerID="fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f" exitCode=0 Nov 24 12:40:56 crc kubenswrapper[4782]: I1124 12:40:56.303582 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerDied","Data":"fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f"} Nov 24 12:40:57 crc kubenswrapper[4782]: I1124 12:40:57.317537 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerStarted","Data":"5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f"} Nov 24 12:40:59 crc kubenswrapper[4782]: I1124 12:40:59.340475 4782 generic.go:334] "Generic (PLEG): container finished" podID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerID="5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f" exitCode=0 Nov 24 12:40:59 crc kubenswrapper[4782]: I1124 12:40:59.340579 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerDied","Data":"5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f"} Nov 24 12:41:00 crc kubenswrapper[4782]: I1124 12:41:00.350938 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerStarted","Data":"f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15"} Nov 24 12:41:00 crc kubenswrapper[4782]: I1124 12:41:00.371837 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p26xw" podStartSLOduration=2.785983329 podStartE2EDuration="6.371820626s" podCreationTimestamp="2025-11-24 12:40:54 +0000 UTC" firstStartedPulling="2025-11-24 12:40:56.305295555 +0000 UTC m=+2705.549129324" lastFinishedPulling="2025-11-24 
Nov 24 12:41:04 crc kubenswrapper[4782]: I1124 12:41:04.480859 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p26xw"
Nov 24 12:41:04 crc kubenswrapper[4782]: I1124 12:41:04.481572 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p26xw"
Nov 24 12:41:04 crc kubenswrapper[4782]: I1124 12:41:04.541700 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p26xw"
Nov 24 12:41:05 crc kubenswrapper[4782]: I1124 12:41:05.477053 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p26xw"
Nov 24 12:41:05 crc kubenswrapper[4782]: I1124 12:41:05.524100 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p26xw"]
Nov 24 12:41:07 crc kubenswrapper[4782]: I1124 12:41:07.434363 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p26xw" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="registry-server" containerID="cri-o://f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15" gracePeriod=2
Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.356727 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p26xw"
Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.453066 4782 generic.go:334] "Generic (PLEG): container finished" podID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerID="f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15" exitCode=0
Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.453113 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p26xw"
Need to start a new one" pod="openshift-marketplace/community-operators-p26xw" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.453124 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerDied","Data":"f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15"} Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.453173 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26xw" event={"ID":"a4c13db8-c27c-4f46-86f5-d49a298ade5d","Type":"ContainerDied","Data":"e85fbddc36a1ba829aac7d69a509fbb3622c2182a86d4494af6d627187910325"} Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.453201 4782 scope.go:117] "RemoveContainer" containerID="f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.475599 4782 scope.go:117] "RemoveContainer" containerID="5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.506619 4782 scope.go:117] "RemoveContainer" containerID="fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.531567 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-utilities\") pod \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.531756 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztw87\" (UniqueName: \"kubernetes.io/projected/a4c13db8-c27c-4f46-86f5-d49a298ade5d-kube-api-access-ztw87\") pod \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.531846 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-catalog-content\") pod \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\" (UID: \"a4c13db8-c27c-4f46-86f5-d49a298ade5d\") " Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.532564 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-utilities" (OuterVolumeSpecName: "utilities") pod "a4c13db8-c27c-4f46-86f5-d49a298ade5d" (UID: "a4c13db8-c27c-4f46-86f5-d49a298ade5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.538031 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c13db8-c27c-4f46-86f5-d49a298ade5d-kube-api-access-ztw87" (OuterVolumeSpecName: "kube-api-access-ztw87") pod "a4c13db8-c27c-4f46-86f5-d49a298ade5d" (UID: "a4c13db8-c27c-4f46-86f5-d49a298ade5d"). InnerVolumeSpecName "kube-api-access-ztw87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.551800 4782 scope.go:117] "RemoveContainer" containerID="f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15" Nov 24 12:41:08 crc kubenswrapper[4782]: E1124 12:41:08.553182 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15\": container with ID starting with f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15 not found: ID does not exist" containerID="f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.553266 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15"} err="failed to get container status \"f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15\": rpc error: code = NotFound desc = could not find container \"f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15\": container with ID starting with f3ae2b972987e7c6160e6278054fed8a000c18202d1cd345a2350a718499df15 not found: ID does not exist" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.553300 4782 scope.go:117] "RemoveContainer" containerID="5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f" Nov 24 12:41:08 crc kubenswrapper[4782]: E1124 12:41:08.553937 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f\": container with ID starting with 5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f not found: ID does not exist" containerID="5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.554002 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f"} err="failed to get container status \"5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f\": rpc error: code = NotFound desc = could not find container \"5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f\": container with ID starting with 5137d4c2a6999cce30781643856f0e3dce58a995e23d8d7d1964bc6d185ee14f not found: ID does not exist" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.554022 4782 scope.go:117] "RemoveContainer" containerID="fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f" Nov 24 12:41:08 crc kubenswrapper[4782]: E1124 12:41:08.554356 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f\": container with ID starting with fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f not found: ID does not exist" containerID="fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.554426 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f"} err="failed to get container status \"fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f\": rpc error: code = NotFound desc = could not 
find container \"fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f\": container with ID starting with fcf32cc95363a101f331429eeb0e3c3db912563af3cbd0ca14b1c478e71ecc7f not found: ID does not exist" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.594579 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4c13db8-c27c-4f46-86f5-d49a298ade5d" (UID: "a4c13db8-c27c-4f46-86f5-d49a298ade5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.634798 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.634841 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4c13db8-c27c-4f46-86f5-d49a298ade5d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.634855 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztw87\" (UniqueName: \"kubernetes.io/projected/a4c13db8-c27c-4f46-86f5-d49a298ade5d-kube-api-access-ztw87\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.794994 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p26xw"] Nov 24 12:41:08 crc kubenswrapper[4782]: I1124 12:41:08.804530 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p26xw"] Nov 24 12:41:09 crc kubenswrapper[4782]: I1124 12:41:09.501037 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" path="/var/lib/kubelet/pods/a4c13db8-c27c-4f46-86f5-d49a298ade5d/volumes" Nov 24 12:42:00 crc kubenswrapper[4782]: I1124 12:42:00.411120 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:42:00 crc kubenswrapper[4782]: I1124 12:42:00.411677 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:42:30 crc kubenswrapper[4782]: I1124 12:42:30.411300 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:42:30 crc kubenswrapper[4782]: I1124 12:42:30.412095 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 12:42:46 crc kubenswrapper[4782]: I1124 12:42:46.631023 4782 generic.go:334] "Generic (PLEG): container finished" podID="c9db5a23-263f-41cc-a1b6-b90df79aa8d2" containerID="06ee33c3b71f5197305afff3038283232f3ea7d8eec84f98a7e4c1763e271dba" exitCode=0 Nov 24 12:42:46 crc kubenswrapper[4782]: I1124 12:42:46.631110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" event={"ID":"c9db5a23-263f-41cc-a1b6-b90df79aa8d2","Type":"ContainerDied","Data":"06ee33c3b71f5197305afff3038283232f3ea7d8eec84f98a7e4c1763e271dba"} Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.033070 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.103394 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-0\") pod \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.103495 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-telemetry-combined-ca-bundle\") pod \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.103575 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-2\") pod \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.103621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ssh-key\") pod \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.103671 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-1\") pod \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.103694 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-inventory\") pod \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.103727 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwdpp\" (UniqueName: \"kubernetes.io/projected/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-kube-api-access-vwdpp\") pod \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\" (UID: \"c9db5a23-263f-41cc-a1b6-b90df79aa8d2\") " Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.109361 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-kube-api-access-vwdpp" (OuterVolumeSpecName: "kube-api-access-vwdpp") pod "c9db5a23-263f-41cc-a1b6-b90df79aa8d2" (UID: "c9db5a23-263f-41cc-a1b6-b90df79aa8d2"). InnerVolumeSpecName "kube-api-access-vwdpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.109453 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c9db5a23-263f-41cc-a1b6-b90df79aa8d2" (UID: "c9db5a23-263f-41cc-a1b6-b90df79aa8d2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.130551 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c9db5a23-263f-41cc-a1b6-b90df79aa8d2" (UID: "c9db5a23-263f-41cc-a1b6-b90df79aa8d2"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.140066 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c9db5a23-263f-41cc-a1b6-b90df79aa8d2" (UID: "c9db5a23-263f-41cc-a1b6-b90df79aa8d2"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.140905 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c9db5a23-263f-41cc-a1b6-b90df79aa8d2" (UID: "c9db5a23-263f-41cc-a1b6-b90df79aa8d2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.143304 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-inventory" (OuterVolumeSpecName: "inventory") pod "c9db5a23-263f-41cc-a1b6-b90df79aa8d2" (UID: "c9db5a23-263f-41cc-a1b6-b90df79aa8d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.156478 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c9db5a23-263f-41cc-a1b6-b90df79aa8d2" (UID: "c9db5a23-263f-41cc-a1b6-b90df79aa8d2"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.206545 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.206578 4782 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.206593 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.206607 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.206619 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.206630 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.206642 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwdpp\" (UniqueName: \"kubernetes.io/projected/c9db5a23-263f-41cc-a1b6-b90df79aa8d2-kube-api-access-vwdpp\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.651735 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" event={"ID":"c9db5a23-263f-41cc-a1b6-b90df79aa8d2","Type":"ContainerDied","Data":"ad32fe47e8a881434437b8368b03a7683db862472fdd6a3a276ac7f4bbd52e8b"} Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.652130 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad32fe47e8a881434437b8368b03a7683db862472fdd6a3a276ac7f4bbd52e8b" Nov 24 12:42:48 crc kubenswrapper[4782]: I1124 12:42:48.652205 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-snkfr" Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.410900 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.411394 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.411442 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.412191 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.412241 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" gracePeriod=600 Nov 24 12:43:00 crc kubenswrapper[4782]: E1124 12:43:00.542406 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.753275 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" exitCode=0 Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.753317 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"} Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.753491 4782 scope.go:117] "RemoveContainer" containerID="5760a807180ac00d6e69fd65ab9ce04e6c93fd610534170bcdb83cb95567b0bd" Nov 24 12:43:00 crc kubenswrapper[4782]: I1124 12:43:00.754065 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:43:00 crc kubenswrapper[4782]: E1124 12:43:00.754447 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
Nov 24 12:43:14 crc kubenswrapper[4782]: I1124 12:43:14.491071 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"
Nov 24 12:43:14 crc kubenswrapper[4782]: E1124 12:43:14.491950 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498"
Nov 24 12:43:25 crc kubenswrapper[4782]: I1124 12:43:25.490803 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"
Nov 24 12:43:25 crc kubenswrapper[4782]: E1124 12:43:25.492215 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.119927 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 12:43:36 crc kubenswrapper[4782]: E1124 12:43:36.120811 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9db5a23-263f-41cc-a1b6-b90df79aa8d2" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.120826 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9db5a23-263f-41cc-a1b6-b90df79aa8d2" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 24 12:43:36 crc kubenswrapper[4782]: E1124 12:43:36.120838 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="registry-server"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.120844 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="registry-server"
Nov 24 12:43:36 crc kubenswrapper[4782]: E1124 12:43:36.120862 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="extract-content"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.120868 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="extract-content"
Nov 24 12:43:36 crc kubenswrapper[4782]: E1124 12:43:36.120893 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="extract-utilities"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.120899 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="extract-utilities"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.121070 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9db5a23-263f-41cc-a1b6-b90df79aa8d2" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.121089 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c13db8-c27c-4f46-86f5-d49a298ade5d" containerName="registry-server"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.121739 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.123766 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.124034 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.124408 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2kh2c"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.123774 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.162727 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.198886 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.199026 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-config-data\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.199155 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301449 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest"
Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301558 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-config-data\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest"
\"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-config-data\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301622 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtpt\" (UniqueName: \"kubernetes.io/projected/bf2749fb-4ae8-43f8-847e-3d4528d4556a-kube-api-access-wqtpt\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301661 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301709 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301749 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301813 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.301858 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.302851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-config-data\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.302854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.311603 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.404484 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.404588 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.404720 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtpt\" (UniqueName: \"kubernetes.io/projected/bf2749fb-4ae8-43f8-847e-3d4528d4556a-kube-api-access-wqtpt\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.404776 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.404893 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.404987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.405684 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.406076 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.406907 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.414256 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.414628 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.426141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtpt\" (UniqueName: \"kubernetes.io/projected/bf2749fb-4ae8-43f8-847e-3d4528d4556a-kube-api-access-wqtpt\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.467421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.489068 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.923022 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:43:36 crc kubenswrapper[4782]: I1124 12:43:36.929119 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:43:37 crc kubenswrapper[4782]: I1124 12:43:37.046390 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bf2749fb-4ae8-43f8-847e-3d4528d4556a","Type":"ContainerStarted","Data":"dc2c70927f24706eff44112b72e7097c0630e47129b464bee57eb1066dde3416"} Nov 24 12:43:40 crc kubenswrapper[4782]: I1124 12:43:40.491448 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:43:40 crc kubenswrapper[4782]: E1124 12:43:40.492488 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:43:53 crc kubenswrapper[4782]: I1124 12:43:53.491476 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:43:53 crc kubenswrapper[4782]: E1124 12:43:53.492201 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:44:07 crc kubenswrapper[4782]: I1124 12:44:07.490805 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:44:07 crc kubenswrapper[4782]: E1124 12:44:07.491840 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:44:13 crc kubenswrapper[4782]: E1124 12:44:13.139684 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 12:44:13 crc kubenswrapper[4782]: E1124 12:44:13.148036 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqtpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoo
Nov 24 12:44:13 crc kubenswrapper[4782]: E1124 12:44:13.149645 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="bf2749fb-4ae8-43f8-847e-3d4528d4556a"
Nov 24 12:44:13 crc kubenswrapper[4782]: E1124 12:44:13.384639 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="bf2749fb-4ae8-43f8-847e-3d4528d4556a"
Nov 24 12:44:21 crc kubenswrapper[4782]: I1124 12:44:21.500047 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"
Nov 24 12:44:21 crc kubenswrapper[4782]: E1124 12:44:21.500975 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498"
Nov 24 12:44:29 crc kubenswrapper[4782]: I1124 12:44:29.183686 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 24 12:44:31 crc kubenswrapper[4782]: I1124 12:44:31.542784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bf2749fb-4ae8-43f8-847e-3d4528d4556a","Type":"ContainerStarted","Data":"b328d3318b070df519dc60f64a5f5d7966e3aae48d1478a79be8aba8fdfe63ca"}
Nov 24 12:44:31 crc kubenswrapper[4782]: I1124 12:44:31.573139 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.321295067 podStartE2EDuration="57.573117477s" podCreationTimestamp="2025-11-24 12:43:34 +0000 UTC" firstStartedPulling="2025-11-24 12:43:36.928889002 +0000 UTC m=+2866.172722771" lastFinishedPulling="2025-11-24 12:44:29.180711412 +0000 UTC m=+2918.424545181" observedRunningTime="2025-11-24 12:44:31.565949953 +0000 UTC m=+2920.809783742" watchObservedRunningTime="2025-11-24 12:44:31.573117477 +0000 UTC m=+2920.816951256"
Nov 24 12:44:34 crc kubenswrapper[4782]: I1124 12:44:34.491496 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"
Nov 24 12:44:34 crc kubenswrapper[4782]: E1124 12:44:34.492802 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498"
Nov 24 12:44:48 crc kubenswrapper[4782]: I1124 12:44:48.490732 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018"
Nov 24 12:44:48 crc kubenswrapper[4782]: E1124 12:44:48.491546 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498"
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.196482 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v"]
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.198939 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v"
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.203501 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.204480 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.217288 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v"]
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.332689 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93aafd1b-f98c-47c7-9d20-84177e08abb5-secret-volume\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v"
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.333088 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93aafd1b-f98c-47c7-9d20-84177e08abb5-config-volume\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v"
Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.333176 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwp4\" (UniqueName: \"kubernetes.io/projected/93aafd1b-f98c-47c7-9d20-84177e08abb5-kube-api-access-xkwp4\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v"
\"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.435475 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93aafd1b-f98c-47c7-9d20-84177e08abb5-config-volume\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.436192 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwp4\" (UniqueName: \"kubernetes.io/projected/93aafd1b-f98c-47c7-9d20-84177e08abb5-kube-api-access-xkwp4\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.436496 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93aafd1b-f98c-47c7-9d20-84177e08abb5-secret-volume\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.436626 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93aafd1b-f98c-47c7-9d20-84177e08abb5-config-volume\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.442699 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93aafd1b-f98c-47c7-9d20-84177e08abb5-secret-volume\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.459752 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwp4\" (UniqueName: \"kubernetes.io/projected/93aafd1b-f98c-47c7-9d20-84177e08abb5-kube-api-access-xkwp4\") pod \"collect-profiles-29399805-8458v\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:00 crc kubenswrapper[4782]: I1124 12:45:00.527244 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:01 crc kubenswrapper[4782]: I1124 12:45:01.122747 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v"] Nov 24 12:45:01 crc kubenswrapper[4782]: I1124 12:45:01.499645 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:45:01 crc kubenswrapper[4782]: E1124 12:45:01.500209 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:45:01 crc kubenswrapper[4782]: I1124 12:45:01.826033 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" event={"ID":"93aafd1b-f98c-47c7-9d20-84177e08abb5","Type":"ContainerStarted","Data":"fa81fd008d4bd1d9b2a25a56abafd2bbe4ce0c17d5b578d6e7a4acaf26d71ec8"} Nov 24 12:45:01 crc kubenswrapper[4782]: I1124 12:45:01.826094 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" event={"ID":"93aafd1b-f98c-47c7-9d20-84177e08abb5","Type":"ContainerStarted","Data":"8b43786f306c62addc9ddd97251e7d686523429ea5999476dd86fc0addde7f4c"} Nov 24 12:45:01 crc kubenswrapper[4782]: I1124 12:45:01.854784 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" podStartSLOduration=1.854754308 podStartE2EDuration="1.854754308s" podCreationTimestamp="2025-11-24 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:01.847793419 +0000 UTC m=+2951.091627188" watchObservedRunningTime="2025-11-24 12:45:01.854754308 +0000 UTC m=+2951.098588077" Nov 24 12:45:02 crc kubenswrapper[4782]: I1124 12:45:02.835962 4782 generic.go:334] "Generic (PLEG): container finished" podID="93aafd1b-f98c-47c7-9d20-84177e08abb5" containerID="fa81fd008d4bd1d9b2a25a56abafd2bbe4ce0c17d5b578d6e7a4acaf26d71ec8" exitCode=0 Nov 24 12:45:02 crc kubenswrapper[4782]: I1124 12:45:02.836338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" event={"ID":"93aafd1b-f98c-47c7-9d20-84177e08abb5","Type":"ContainerDied","Data":"fa81fd008d4bd1d9b2a25a56abafd2bbe4ce0c17d5b578d6e7a4acaf26d71ec8"} Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.266528 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.444328 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93aafd1b-f98c-47c7-9d20-84177e08abb5-config-volume\") pod \"93aafd1b-f98c-47c7-9d20-84177e08abb5\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.444514 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkwp4\" (UniqueName: \"kubernetes.io/projected/93aafd1b-f98c-47c7-9d20-84177e08abb5-kube-api-access-xkwp4\") pod \"93aafd1b-f98c-47c7-9d20-84177e08abb5\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.444560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93aafd1b-f98c-47c7-9d20-84177e08abb5-secret-volume\") pod \"93aafd1b-f98c-47c7-9d20-84177e08abb5\" (UID: \"93aafd1b-f98c-47c7-9d20-84177e08abb5\") " Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.447247 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93aafd1b-f98c-47c7-9d20-84177e08abb5-config-volume" (OuterVolumeSpecName: "config-volume") pod "93aafd1b-f98c-47c7-9d20-84177e08abb5" (UID: "93aafd1b-f98c-47c7-9d20-84177e08abb5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.461203 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93aafd1b-f98c-47c7-9d20-84177e08abb5-kube-api-access-xkwp4" (OuterVolumeSpecName: "kube-api-access-xkwp4") pod "93aafd1b-f98c-47c7-9d20-84177e08abb5" (UID: "93aafd1b-f98c-47c7-9d20-84177e08abb5"). InnerVolumeSpecName "kube-api-access-xkwp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.465629 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93aafd1b-f98c-47c7-9d20-84177e08abb5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "93aafd1b-f98c-47c7-9d20-84177e08abb5" (UID: "93aafd1b-f98c-47c7-9d20-84177e08abb5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.547885 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93aafd1b-f98c-47c7-9d20-84177e08abb5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.547927 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkwp4\" (UniqueName: \"kubernetes.io/projected/93aafd1b-f98c-47c7-9d20-84177e08abb5-kube-api-access-xkwp4\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.547963 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93aafd1b-f98c-47c7-9d20-84177e08abb5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.600538 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd"] Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.607867 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-tk2xd"] Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.858985 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.858910 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-8458v" event={"ID":"93aafd1b-f98c-47c7-9d20-84177e08abb5","Type":"ContainerDied","Data":"8b43786f306c62addc9ddd97251e7d686523429ea5999476dd86fc0addde7f4c"} Nov 24 12:45:04 crc kubenswrapper[4782]: I1124 12:45:04.864678 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b43786f306c62addc9ddd97251e7d686523429ea5999476dd86fc0addde7f4c" Nov 24 12:45:05 crc kubenswrapper[4782]: I1124 12:45:05.540540 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e616a4-be07-4f13-a359-dcfcdbdec622" path="/var/lib/kubelet/pods/30e616a4-be07-4f13-a359-dcfcdbdec622/volumes" Nov 24 12:45:15 crc kubenswrapper[4782]: I1124 12:45:15.491841 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:45:15 crc kubenswrapper[4782]: E1124 12:45:15.492795 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:45:29 crc kubenswrapper[4782]: I1124 12:45:29.490968 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:45:29 crc kubenswrapper[4782]: E1124 12:45:29.491749 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:45:40 crc kubenswrapper[4782]: I1124 12:45:40.491538 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:45:40 crc kubenswrapper[4782]: E1124 12:45:40.493451 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:45:52 crc kubenswrapper[4782]: I1124 12:45:52.490656 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:45:52 crc kubenswrapper[4782]: E1124 12:45:52.491406 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:46:04 crc kubenswrapper[4782]: I1124 12:46:04.955475 4782 scope.go:117] "RemoveContainer" containerID="aba55618d39af55f7ec67a8bbc7f661dba18d364d30be1cc148219607bb50980" Nov 24 12:46:06 crc kubenswrapper[4782]: I1124 12:46:06.492585 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:46:06 crc kubenswrapper[4782]: E1124 12:46:06.493308 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:46:20 crc kubenswrapper[4782]: I1124 12:46:20.490742 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:46:20 crc kubenswrapper[4782]: E1124 12:46:20.491501 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:46:31 crc kubenswrapper[4782]: I1124 12:46:31.496461 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:46:31 crc kubenswrapper[4782]: E1124 12:46:31.497219 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:46:43 crc kubenswrapper[4782]: I1124 12:46:43.491156 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:46:43 crc kubenswrapper[4782]: E1124 12:46:43.492866 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:46:54 crc kubenswrapper[4782]: I1124 12:46:54.490495 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:46:54 crc kubenswrapper[4782]: E1124 12:46:54.491129 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:47:07 crc kubenswrapper[4782]: I1124 12:47:07.491124 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:47:07 crc kubenswrapper[4782]: E1124 12:47:07.492012 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:47:13 crc kubenswrapper[4782]: I1124 12:47:13.970739 4782 generic.go:334] "Generic (PLEG): container finished" podID="bf2749fb-4ae8-43f8-847e-3d4528d4556a" containerID="b328d3318b070df519dc60f64a5f5d7966e3aae48d1478a79be8aba8fdfe63ca" exitCode=0 Nov 24 12:47:13 crc kubenswrapper[4782]: I1124 12:47:13.970833 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bf2749fb-4ae8-43f8-847e-3d4528d4556a","Type":"ContainerDied","Data":"b328d3318b070df519dc60f64a5f5d7966e3aae48d1478a79be8aba8fdfe63ca"} Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.419504 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595016 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-workdir\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595110 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqtpt\" (UniqueName: \"kubernetes.io/projected/bf2749fb-4ae8-43f8-847e-3d4528d4556a-kube-api-access-wqtpt\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595204 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-temporary\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595334 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ssh-key\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595510 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-config-data\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595582 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595782 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ca-certs\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595865 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.595915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config-secret\") pod \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\" (UID: \"bf2749fb-4ae8-43f8-847e-3d4528d4556a\") " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.596078 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.596628 4782 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.596991 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-config-data" (OuterVolumeSpecName: "config-data") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.602531 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.610759 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2749fb-4ae8-43f8-847e-3d4528d4556a-kube-api-access-wqtpt" (OuterVolumeSpecName: "kube-api-access-wqtpt") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "kube-api-access-wqtpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.619744 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.627685 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.630494 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.630630 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.656067 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bf2749fb-4ae8-43f8-847e-3d4528d4556a" (UID: "bf2749fb-4ae8-43f8-847e-3d4528d4556a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.698311 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.698599 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.698700 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.698764 4782 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.698823 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.698888 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf2749fb-4ae8-43f8-847e-3d4528d4556a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.698952 4782 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bf2749fb-4ae8-43f8-847e-3d4528d4556a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.699016 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqtpt\" (UniqueName: \"kubernetes.io/projected/bf2749fb-4ae8-43f8-847e-3d4528d4556a-kube-api-access-wqtpt\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.720455 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.800734 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 
24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.998952 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bf2749fb-4ae8-43f8-847e-3d4528d4556a","Type":"ContainerDied","Data":"dc2c70927f24706eff44112b72e7097c0630e47129b464bee57eb1066dde3416"} Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.998978 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:47:15 crc kubenswrapper[4782]: I1124 12:47:15.998994 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2c70927f24706eff44112b72e7097c0630e47129b464bee57eb1066dde3416" Nov 24 12:47:18 crc kubenswrapper[4782]: I1124 12:47:18.491345 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:47:18 crc kubenswrapper[4782]: E1124 12:47:18.492261 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.936797 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:47:22 crc kubenswrapper[4782]: E1124 12:47:22.937717 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2749fb-4ae8-43f8-847e-3d4528d4556a" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.937735 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2749fb-4ae8-43f8-847e-3d4528d4556a" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:47:22 crc kubenswrapper[4782]: E1124 12:47:22.937757 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93aafd1b-f98c-47c7-9d20-84177e08abb5" containerName="collect-profiles" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.937765 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="93aafd1b-f98c-47c7-9d20-84177e08abb5" containerName="collect-profiles" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.937937 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2749fb-4ae8-43f8-847e-3d4528d4556a" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.937962 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="93aafd1b-f98c-47c7-9d20-84177e08abb5" containerName="collect-profiles" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.938552 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.940363 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2kh2c" Nov 24 12:47:22 crc kubenswrapper[4782]: I1124 12:47:22.944363 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.138522 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrsrj\" (UniqueName: \"kubernetes.io/projected/2cac3222-c76a-4e73-8333-38f146cec5c9-kube-api-access-vrsrj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2cac3222-c76a-4e73-8333-38f146cec5c9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.138910 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2cac3222-c76a-4e73-8333-38f146cec5c9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.240930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2cac3222-c76a-4e73-8333-38f146cec5c9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.241097 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrsrj\" (UniqueName: \"kubernetes.io/projected/2cac3222-c76a-4e73-8333-38f146cec5c9-kube-api-access-vrsrj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2cac3222-c76a-4e73-8333-38f146cec5c9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.241434 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2cac3222-c76a-4e73-8333-38f146cec5c9\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.258983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrsrj\" (UniqueName: \"kubernetes.io/projected/2cac3222-c76a-4e73-8333-38f146cec5c9-kube-api-access-vrsrj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2cac3222-c76a-4e73-8333-38f146cec5c9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.266191 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2cac3222-c76a-4e73-8333-38f146cec5c9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc 
kubenswrapper[4782]: I1124 12:47:23.274040 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:47:23 crc kubenswrapper[4782]: I1124 12:47:23.718525 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:47:24 crc kubenswrapper[4782]: I1124 12:47:24.058465 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2cac3222-c76a-4e73-8333-38f146cec5c9","Type":"ContainerStarted","Data":"b8071939a394e3b7f8ff3160525bf2edf4cdb8cd067ee536cacecc46b41cb599"} Nov 24 12:47:25 crc kubenswrapper[4782]: I1124 12:47:25.068352 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2cac3222-c76a-4e73-8333-38f146cec5c9","Type":"ContainerStarted","Data":"0155529a99ff8b0f4480f7c126dbbf71a5ad1fe708ef650199da8c27f44cb131"} Nov 24 12:47:25 crc kubenswrapper[4782]: I1124 12:47:25.088461 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.125339797 podStartE2EDuration="3.088444071s" podCreationTimestamp="2025-11-24 12:47:22 +0000 UTC" firstStartedPulling="2025-11-24 12:47:23.73578291 +0000 UTC m=+3092.979616679" lastFinishedPulling="2025-11-24 12:47:24.698887184 +0000 UTC m=+3093.942720953" observedRunningTime="2025-11-24 12:47:25.080616779 +0000 UTC m=+3094.324450568" watchObservedRunningTime="2025-11-24 12:47:25.088444071 +0000 UTC m=+3094.332277840" Nov 24 12:47:33 crc kubenswrapper[4782]: I1124 12:47:33.492128 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:47:33 crc kubenswrapper[4782]: E1124 12:47:33.492888 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.446684 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kvrl6/must-gather-h6t7l"] Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.449098 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.456750 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kvrl6"/"default-dockercfg-k6j84" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.459799 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kvrl6"/"openshift-service-ca.crt" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.463158 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kvrl6"/"kube-root-ca.crt" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.582154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kvrl6/must-gather-h6t7l"] Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.623866 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcjrh\" (UniqueName: \"kubernetes.io/projected/7a964d21-2595-47ee-ae52-2d0677bd25eb-kube-api-access-rcjrh\") pod \"must-gather-h6t7l\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.623987 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a964d21-2595-47ee-ae52-2d0677bd25eb-must-gather-output\") pod \"must-gather-h6t7l\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.725687 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcjrh\" (UniqueName: \"kubernetes.io/projected/7a964d21-2595-47ee-ae52-2d0677bd25eb-kube-api-access-rcjrh\") pod \"must-gather-h6t7l\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.725769 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a964d21-2595-47ee-ae52-2d0677bd25eb-must-gather-output\") pod \"must-gather-h6t7l\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.726231 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a964d21-2595-47ee-ae52-2d0677bd25eb-must-gather-output\") pod \"must-gather-h6t7l\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.749120 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcjrh\" (UniqueName: \"kubernetes.io/projected/7a964d21-2595-47ee-ae52-2d0677bd25eb-kube-api-access-rcjrh\") pod \"must-gather-h6t7l\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:47 crc kubenswrapper[4782]: I1124 12:47:47.778304 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:47:48 crc kubenswrapper[4782]: I1124 12:47:48.276420 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kvrl6/must-gather-h6t7l"] Nov 24 12:47:48 crc kubenswrapper[4782]: I1124 12:47:48.491677 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:47:48 crc kubenswrapper[4782]: E1124 12:47:48.491974 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:47:49 crc kubenswrapper[4782]: I1124 12:47:49.279915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" event={"ID":"7a964d21-2595-47ee-ae52-2d0677bd25eb","Type":"ContainerStarted","Data":"a4b42ba9d03848a8d5cdbe4227ee4bb6c5110dd62f6b4fc74bdb2c68d54e4287"} Nov 24 12:47:53 crc kubenswrapper[4782]: I1124 12:47:53.319031 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" event={"ID":"7a964d21-2595-47ee-ae52-2d0677bd25eb","Type":"ContainerStarted","Data":"6d71022feea65a0023180470a98ba8a9a2b8d64d405cd839ae57862457b63455"} Nov 24 12:47:53 crc kubenswrapper[4782]: I1124 12:47:53.319652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" event={"ID":"7a964d21-2595-47ee-ae52-2d0677bd25eb","Type":"ContainerStarted","Data":"3f7af836540877b1356f36ec29b8ba93d46cbeb89f1a2c0c768054d4bfa6feea"} Nov 24 12:47:53 crc kubenswrapper[4782]: I1124 12:47:53.342827 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" podStartSLOduration=2.091879754 podStartE2EDuration="6.342804123s" podCreationTimestamp="2025-11-24 12:47:47 +0000 UTC" firstStartedPulling="2025-11-24 12:47:48.283561379 +0000 UTC m=+3117.527395148" lastFinishedPulling="2025-11-24 12:47:52.534485738 +0000 UTC m=+3121.778319517" observedRunningTime="2025-11-24 12:47:53.338210828 +0000 UTC m=+3122.582044607" watchObservedRunningTime="2025-11-24 12:47:53.342804123 +0000 UTC m=+3122.586637892" Nov 24 12:47:56 crc kubenswrapper[4782]: I1124 12:47:56.964906 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-9n9dm"] Nov 24 12:47:56 crc kubenswrapper[4782]: I1124 12:47:56.967391 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.129851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dbbeb678-96ec-457b-b396-376da5fdd169-host\") pod \"crc-debug-9n9dm\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.130043 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66j6x\" (UniqueName: \"kubernetes.io/projected/dbbeb678-96ec-457b-b396-376da5fdd169-kube-api-access-66j6x\") pod \"crc-debug-9n9dm\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.231877 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66j6x\" (UniqueName: \"kubernetes.io/projected/dbbeb678-96ec-457b-b396-376da5fdd169-kube-api-access-66j6x\") pod \"crc-debug-9n9dm\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.231989 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dbbeb678-96ec-457b-b396-376da5fdd169-host\") pod \"crc-debug-9n9dm\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.232094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dbbeb678-96ec-457b-b396-376da5fdd169-host\") pod \"crc-debug-9n9dm\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.261310 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66j6x\" (UniqueName: \"kubernetes.io/projected/dbbeb678-96ec-457b-b396-376da5fdd169-kube-api-access-66j6x\") pod \"crc-debug-9n9dm\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.287105 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:47:57 crc kubenswrapper[4782]: W1124 12:47:57.329237 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbbeb678_96ec_457b_b396_376da5fdd169.slice/crio-cd8eb42fd6970e8b6cac08b894c8fdcd7c647294d9b04b67026e7bd34a94603c WatchSource:0}: Error finding container cd8eb42fd6970e8b6cac08b894c8fdcd7c647294d9b04b67026e7bd34a94603c: Status 404 returned error can't find the container with id cd8eb42fd6970e8b6cac08b894c8fdcd7c647294d9b04b67026e7bd34a94603c Nov 24 12:47:57 crc kubenswrapper[4782]: I1124 12:47:57.352078 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" event={"ID":"dbbeb678-96ec-457b-b396-376da5fdd169","Type":"ContainerStarted","Data":"cd8eb42fd6970e8b6cac08b894c8fdcd7c647294d9b04b67026e7bd34a94603c"} Nov 24 12:48:01 crc kubenswrapper[4782]: I1124 12:48:01.491263 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:48:02 crc kubenswrapper[4782]: I1124 12:48:02.416550 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"e327dfe85cc20d32dff283802dfe42d09ab68e0f3ad1c8b5f638e24d0843354c"} Nov 24 12:48:12 crc kubenswrapper[4782]: I1124 12:48:12.517403 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" event={"ID":"dbbeb678-96ec-457b-b396-376da5fdd169","Type":"ContainerStarted","Data":"385be2ab97924ce05f181a45f48786164b0cdb6144f1a0371b65925e1d41f483"} Nov 24 12:48:12 crc kubenswrapper[4782]: I1124 12:48:12.534805 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" podStartSLOduration=2.560563257 podStartE2EDuration="16.534787534s" podCreationTimestamp="2025-11-24 12:47:56 +0000 UTC" firstStartedPulling="2025-11-24 12:47:57.331635292 +0000 UTC m=+3126.575469051" lastFinishedPulling="2025-11-24 12:48:11.305859559 +0000 UTC m=+3140.549693328" observedRunningTime="2025-11-24 12:48:12.530294122 +0000 UTC m=+3141.774127891" watchObservedRunningTime="2025-11-24 12:48:12.534787534 +0000 UTC m=+3141.778621303" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.488571 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b5ww8"] Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.491182 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.513767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5ww8"] Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.578389 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jxh\" (UniqueName: \"kubernetes.io/projected/0005f2e7-f850-4e15-80ab-136a218e9880-kube-api-access-x2jxh\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.578531 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-catalog-content\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.579942 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-utilities\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.682136 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jxh\" (UniqueName: \"kubernetes.io/projected/0005f2e7-f850-4e15-80ab-136a218e9880-kube-api-access-x2jxh\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.682598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-catalog-content\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.682962 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-catalog-content\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.683135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-utilities\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.683447 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-utilities\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.705168 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x2jxh\" (UniqueName: \"kubernetes.io/projected/0005f2e7-f850-4e15-80ab-136a218e9880-kube-api-access-x2jxh\") pod \"certified-operators-b5ww8\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:46 crc kubenswrapper[4782]: I1124 12:48:46.809839 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:47 crc kubenswrapper[4782]: I1124 12:48:47.567872 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5ww8"] Nov 24 12:48:47 crc kubenswrapper[4782]: I1124 12:48:47.874450 4782 generic.go:334] "Generic (PLEG): container finished" podID="0005f2e7-f850-4e15-80ab-136a218e9880" containerID="103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc" exitCode=0 Nov 24 12:48:47 crc kubenswrapper[4782]: I1124 12:48:47.874768 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5ww8" event={"ID":"0005f2e7-f850-4e15-80ab-136a218e9880","Type":"ContainerDied","Data":"103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc"} Nov 24 12:48:47 crc kubenswrapper[4782]: I1124 12:48:47.874799 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5ww8" event={"ID":"0005f2e7-f850-4e15-80ab-136a218e9880","Type":"ContainerStarted","Data":"26b5c2561f16b2ae87feb947ae3b5c6d820ca6e15a50bfb09a72246f53dd8045"} Nov 24 12:48:47 crc kubenswrapper[4782]: I1124 12:48:47.877010 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:48:48 crc kubenswrapper[4782]: I1124 12:48:48.884635 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5ww8" event={"ID":"0005f2e7-f850-4e15-80ab-136a218e9880","Type":"ContainerStarted","Data":"a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13"} Nov 24 12:48:52 crc kubenswrapper[4782]: I1124 12:48:52.918651 4782 generic.go:334] "Generic (PLEG): container finished" podID="0005f2e7-f850-4e15-80ab-136a218e9880" containerID="a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13" exitCode=0 Nov 24 12:48:52 crc kubenswrapper[4782]: I1124 12:48:52.918736 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5ww8" event={"ID":"0005f2e7-f850-4e15-80ab-136a218e9880","Type":"ContainerDied","Data":"a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13"} Nov 24 12:48:53 crc kubenswrapper[4782]: I1124 12:48:53.930636 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5ww8" event={"ID":"0005f2e7-f850-4e15-80ab-136a218e9880","Type":"ContainerStarted","Data":"0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd"} Nov 24 12:48:53 crc kubenswrapper[4782]: I1124 12:48:53.964595 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b5ww8" podStartSLOduration=2.25137715 podStartE2EDuration="7.964567001s" podCreationTimestamp="2025-11-24 12:48:46 +0000 UTC" firstStartedPulling="2025-11-24 12:48:47.876756988 +0000 UTC m=+3177.120590757" lastFinishedPulling="2025-11-24 12:48:53.589946839 +0000 UTC m=+3182.833780608" observedRunningTime="2025-11-24 12:48:53.957637663 +0000 UTC m=+3183.201471422" watchObservedRunningTime="2025-11-24 
12:48:53.964567001 +0000 UTC m=+3183.208400770" Nov 24 12:48:56 crc kubenswrapper[4782]: I1124 12:48:56.809941 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:56 crc kubenswrapper[4782]: I1124 12:48:56.810474 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:48:57 crc kubenswrapper[4782]: I1124 12:48:57.863882 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-b5ww8" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="registry-server" probeResult="failure" output=< Nov 24 12:48:57 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:48:57 crc kubenswrapper[4782]: > Nov 24 12:49:02 crc kubenswrapper[4782]: I1124 12:49:02.000456 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" event={"ID":"dbbeb678-96ec-457b-b396-376da5fdd169","Type":"ContainerDied","Data":"385be2ab97924ce05f181a45f48786164b0cdb6144f1a0371b65925e1d41f483"} Nov 24 12:49:02 crc kubenswrapper[4782]: I1124 12:49:02.000361 4782 generic.go:334] "Generic (PLEG): container finished" podID="dbbeb678-96ec-457b-b396-376da5fdd169" containerID="385be2ab97924ce05f181a45f48786164b0cdb6144f1a0371b65925e1d41f483" exitCode=0 Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.140053 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.192459 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-9n9dm"] Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.201624 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-9n9dm"] Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.214482 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66j6x\" (UniqueName: \"kubernetes.io/projected/dbbeb678-96ec-457b-b396-376da5fdd169-kube-api-access-66j6x\") pod \"dbbeb678-96ec-457b-b396-376da5fdd169\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.214650 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dbbeb678-96ec-457b-b396-376da5fdd169-host\") pod \"dbbeb678-96ec-457b-b396-376da5fdd169\" (UID: \"dbbeb678-96ec-457b-b396-376da5fdd169\") " Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.215019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbbeb678-96ec-457b-b396-376da5fdd169-host" (OuterVolumeSpecName: "host") pod "dbbeb678-96ec-457b-b396-376da5fdd169" (UID: "dbbeb678-96ec-457b-b396-376da5fdd169"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.224541 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbeb678-96ec-457b-b396-376da5fdd169-kube-api-access-66j6x" (OuterVolumeSpecName: "kube-api-access-66j6x") pod "dbbeb678-96ec-457b-b396-376da5fdd169" (UID: "dbbeb678-96ec-457b-b396-376da5fdd169"). InnerVolumeSpecName "kube-api-access-66j6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.316517 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66j6x\" (UniqueName: \"kubernetes.io/projected/dbbeb678-96ec-457b-b396-376da5fdd169-kube-api-access-66j6x\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.316797 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dbbeb678-96ec-457b-b396-376da5fdd169-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:03 crc kubenswrapper[4782]: I1124 12:49:03.503031 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbeb678-96ec-457b-b396-376da5fdd169" path="/var/lib/kubelet/pods/dbbeb678-96ec-457b-b396-376da5fdd169/volumes" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.025839 4782 scope.go:117] "RemoveContainer" containerID="385be2ab97924ce05f181a45f48786164b0cdb6144f1a0371b65925e1d41f483" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.025847 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-9n9dm" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.394428 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-f47wh"] Nov 24 12:49:04 crc kubenswrapper[4782]: E1124 12:49:04.395781 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbeb678-96ec-457b-b396-376da5fdd169" containerName="container-00" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.395805 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbeb678-96ec-457b-b396-376da5fdd169" containerName="container-00" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.396044 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbeb678-96ec-457b-b396-376da5fdd169" containerName="container-00" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.397288 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.541630 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-host\") pod \"crc-debug-f47wh\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.541748 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk65w\" (UniqueName: \"kubernetes.io/projected/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-kube-api-access-zk65w\") pod \"crc-debug-f47wh\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.644066 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-host\") pod \"crc-debug-f47wh\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.644165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk65w\" (UniqueName: \"kubernetes.io/projected/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-kube-api-access-zk65w\") pod \"crc-debug-f47wh\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.645466 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-host\") pod \"crc-debug-f47wh\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.669503 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk65w\" (UniqueName: \"kubernetes.io/projected/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-kube-api-access-zk65w\") pod \"crc-debug-f47wh\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:04 crc kubenswrapper[4782]: I1124 12:49:04.721631 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:05 crc kubenswrapper[4782]: I1124 12:49:05.042796 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-f47wh" event={"ID":"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa","Type":"ContainerStarted","Data":"280554e1cc7845c5c2f5772be7266924f32f595c96c5fb41e17acec9ce7f79f8"} Nov 24 12:49:05 crc kubenswrapper[4782]: I1124 12:49:05.043248 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-f47wh" event={"ID":"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa","Type":"ContainerStarted","Data":"60dbf5fafd0e37d960e6fe3b55e225639cf4c028c12562d8dea0593ca60893e4"} Nov 24 12:49:06 crc kubenswrapper[4782]: I1124 12:49:06.062367 4782 generic.go:334] "Generic (PLEG): container finished" podID="b77d6627-bb10-49d1-b8f4-bbe49d6d81fa" containerID="280554e1cc7845c5c2f5772be7266924f32f595c96c5fb41e17acec9ce7f79f8" exitCode=0 Nov 24 12:49:06 crc kubenswrapper[4782]: I1124 12:49:06.063199 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-f47wh" event={"ID":"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa","Type":"ContainerDied","Data":"280554e1cc7845c5c2f5772be7266924f32f595c96c5fb41e17acec9ce7f79f8"} Nov 24 12:49:06 crc kubenswrapper[4782]: I1124 12:49:06.588618 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-f47wh"] Nov 24 12:49:06 crc kubenswrapper[4782]: I1124 12:49:06.597409 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-f47wh"] Nov 24 12:49:06 crc kubenswrapper[4782]: I1124 12:49:06.864113 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:49:06 crc kubenswrapper[4782]: I1124 12:49:06.915117 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.105087 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5ww8"] Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.193565 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.302257 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-host\") pod \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.302316 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk65w\" (UniqueName: \"kubernetes.io/projected/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-kube-api-access-zk65w\") pod \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\" (UID: \"b77d6627-bb10-49d1-b8f4-bbe49d6d81fa\") " Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.302394 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-host" (OuterVolumeSpecName: "host") pod "b77d6627-bb10-49d1-b8f4-bbe49d6d81fa" (UID: "b77d6627-bb10-49d1-b8f4-bbe49d6d81fa"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.302805 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.307429 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-kube-api-access-zk65w" (OuterVolumeSpecName: "kube-api-access-zk65w") pod "b77d6627-bb10-49d1-b8f4-bbe49d6d81fa" (UID: "b77d6627-bb10-49d1-b8f4-bbe49d6d81fa"). InnerVolumeSpecName "kube-api-access-zk65w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.404801 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk65w\" (UniqueName: \"kubernetes.io/projected/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa-kube-api-access-zk65w\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.503553 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77d6627-bb10-49d1-b8f4-bbe49d6d81fa" path="/var/lib/kubelet/pods/b77d6627-bb10-49d1-b8f4-bbe49d6d81fa/volumes" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.763931 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-j57lv"] Nov 24 12:49:07 crc kubenswrapper[4782]: E1124 12:49:07.764770 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77d6627-bb10-49d1-b8f4-bbe49d6d81fa" containerName="container-00" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.764794 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77d6627-bb10-49d1-b8f4-bbe49d6d81fa" containerName="container-00" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.765069 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b77d6627-bb10-49d1-b8f4-bbe49d6d81fa" containerName="container-00" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.765890 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.913871 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-host\") pod \"crc-debug-j57lv\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:07 crc kubenswrapper[4782]: I1124 12:49:07.914008 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n65p\" (UniqueName: \"kubernetes.io/projected/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-kube-api-access-8n65p\") pod \"crc-debug-j57lv\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.015594 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-host\") pod \"crc-debug-j57lv\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.015729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n65p\" (UniqueName: \"kubernetes.io/projected/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-kube-api-access-8n65p\") pod \"crc-debug-j57lv\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.016241 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-host\") pod \"crc-debug-j57lv\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.032448 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n65p\" (UniqueName: \"kubernetes.io/projected/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-kube-api-access-8n65p\") pod \"crc-debug-j57lv\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.085653 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b5ww8" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="registry-server" containerID="cri-o://0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd" gracePeriod=2 Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.086160 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-f47wh" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.086239 4782 scope.go:117] "RemoveContainer" containerID="280554e1cc7845c5c2f5772be7266924f32f595c96c5fb41e17acec9ce7f79f8" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.096364 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:08 crc kubenswrapper[4782]: W1124 12:49:08.133097 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3be36ee_1f0f_4fb4_88ba_ea564acb28c3.slice/crio-df567ecdfb41c8447df7a2291a749328904a5e1b03703a64d8e4dc6a2e1f9759 WatchSource:0}: Error finding container df567ecdfb41c8447df7a2291a749328904a5e1b03703a64d8e4dc6a2e1f9759: Status 404 returned error can't find the container with id df567ecdfb41c8447df7a2291a749328904a5e1b03703a64d8e4dc6a2e1f9759 Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.482796 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.630331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2jxh\" (UniqueName: \"kubernetes.io/projected/0005f2e7-f850-4e15-80ab-136a218e9880-kube-api-access-x2jxh\") pod \"0005f2e7-f850-4e15-80ab-136a218e9880\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.630437 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-catalog-content\") pod \"0005f2e7-f850-4e15-80ab-136a218e9880\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.630639 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-utilities\") pod \"0005f2e7-f850-4e15-80ab-136a218e9880\" (UID: \"0005f2e7-f850-4e15-80ab-136a218e9880\") " Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.637888 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-utilities" (OuterVolumeSpecName: "utilities") pod "0005f2e7-f850-4e15-80ab-136a218e9880" (UID: "0005f2e7-f850-4e15-80ab-136a218e9880"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.637918 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0005f2e7-f850-4e15-80ab-136a218e9880-kube-api-access-x2jxh" (OuterVolumeSpecName: "kube-api-access-x2jxh") pod "0005f2e7-f850-4e15-80ab-136a218e9880" (UID: "0005f2e7-f850-4e15-80ab-136a218e9880"). InnerVolumeSpecName "kube-api-access-x2jxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.702048 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0005f2e7-f850-4e15-80ab-136a218e9880" (UID: "0005f2e7-f850-4e15-80ab-136a218e9880"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.732916 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2jxh\" (UniqueName: \"kubernetes.io/projected/0005f2e7-f850-4e15-80ab-136a218e9880-kube-api-access-x2jxh\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.732961 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:08 crc kubenswrapper[4782]: I1124 12:49:08.732973 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005f2e7-f850-4e15-80ab-136a218e9880-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.105606 4782 generic.go:334] "Generic (PLEG): container finished" podID="c3be36ee-1f0f-4fb4-88ba-ea564acb28c3" containerID="918df772327123a8e2cb96b5cb654b337dc3d93eb6d5c0340e8bb98f9cac67f3" exitCode=0 Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.105732 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-j57lv" event={"ID":"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3","Type":"ContainerDied","Data":"918df772327123a8e2cb96b5cb654b337dc3d93eb6d5c0340e8bb98f9cac67f3"} Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.106440 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/crc-debug-j57lv" event={"ID":"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3","Type":"ContainerStarted","Data":"df567ecdfb41c8447df7a2291a749328904a5e1b03703a64d8e4dc6a2e1f9759"} Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.109514 4782 generic.go:334] "Generic (PLEG): container finished" podID="0005f2e7-f850-4e15-80ab-136a218e9880" containerID="0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd" exitCode=0 Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.109626 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5ww8" event={"ID":"0005f2e7-f850-4e15-80ab-136a218e9880","Type":"ContainerDied","Data":"0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd"} Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.109822 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5ww8" event={"ID":"0005f2e7-f850-4e15-80ab-136a218e9880","Type":"ContainerDied","Data":"26b5c2561f16b2ae87feb947ae3b5c6d820ca6e15a50bfb09a72246f53dd8045"} Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.109710 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5ww8" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.109921 4782 scope.go:117] "RemoveContainer" containerID="0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.159730 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-j57lv"] Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.173990 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kvrl6/crc-debug-j57lv"] Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.193036 4782 scope.go:117] "RemoveContainer" containerID="a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.200836 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5ww8"] Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.211463 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b5ww8"] Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.222535 4782 scope.go:117] "RemoveContainer" containerID="103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.272337 4782 scope.go:117] "RemoveContainer" containerID="0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd" Nov 24 12:49:09 crc kubenswrapper[4782]: E1124 12:49:09.272810 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd\": container with ID starting with 0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd not found: ID does not exist" containerID="0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.272857 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd"} err="failed to get container status \"0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd\": rpc error: code = NotFound desc = could not find container \"0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd\": container with ID starting with 0989fc4bd81b5ed7a72a0093b6fd09b603da9f0b95edc89ff9d9caecbc74a4dd not found: ID does not exist" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.272883 4782 scope.go:117] "RemoveContainer" containerID="a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13" Nov 24 12:49:09 crc kubenswrapper[4782]: E1124 12:49:09.273317 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13\": container with ID starting with a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13 not found: ID does not exist" containerID="a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.273358 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13"} err="failed to get container status \"a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13\": rpc error: code = NotFound desc = 
could not find container \"a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13\": container with ID starting with a47d97fd04aaca9a01bad4317dad2b09cb58ed7e750e426052e002dd89812c13 not found: ID does not exist" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.273649 4782 scope.go:117] "RemoveContainer" containerID="103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc" Nov 24 12:49:09 crc kubenswrapper[4782]: E1124 12:49:09.274284 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc\": container with ID starting with 103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc not found: ID does not exist" containerID="103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.274499 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc"} err="failed to get container status \"103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc\": rpc error: code = NotFound desc = could not find container \"103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc\": container with ID starting with 103cb8544a78f605b895eb81ea6931864290d3aed328dc0d24c10c09e4fda0cc not found: ID does not exist" Nov 24 12:49:09 crc kubenswrapper[4782]: I1124 12:49:09.504489 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" path="/var/lib/kubelet/pods/0005f2e7-f850-4e15-80ab-136a218e9880/volumes" Nov 24 12:49:10 crc kubenswrapper[4782]: I1124 12:49:10.232642 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:10 crc kubenswrapper[4782]: I1124 12:49:10.364743 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-host\") pod \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " Nov 24 12:49:10 crc kubenswrapper[4782]: I1124 12:49:10.364847 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n65p\" (UniqueName: \"kubernetes.io/projected/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-kube-api-access-8n65p\") pod \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\" (UID: \"c3be36ee-1f0f-4fb4-88ba-ea564acb28c3\") " Nov 24 12:49:10 crc kubenswrapper[4782]: I1124 12:49:10.364881 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-host" (OuterVolumeSpecName: "host") pod "c3be36ee-1f0f-4fb4-88ba-ea564acb28c3" (UID: "c3be36ee-1f0f-4fb4-88ba-ea564acb28c3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:49:10 crc kubenswrapper[4782]: I1124 12:49:10.365983 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:10 crc kubenswrapper[4782]: I1124 12:49:10.373563 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-kube-api-access-8n65p" (OuterVolumeSpecName: "kube-api-access-8n65p") pod "c3be36ee-1f0f-4fb4-88ba-ea564acb28c3" (UID: "c3be36ee-1f0f-4fb4-88ba-ea564acb28c3"). InnerVolumeSpecName "kube-api-access-8n65p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:10 crc kubenswrapper[4782]: I1124 12:49:10.467790 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n65p\" (UniqueName: \"kubernetes.io/projected/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3-kube-api-access-8n65p\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:11 crc kubenswrapper[4782]: I1124 12:49:11.132510 4782 scope.go:117] "RemoveContainer" containerID="918df772327123a8e2cb96b5cb654b337dc3d93eb6d5c0340e8bb98f9cac67f3" Nov 24 12:49:11 crc kubenswrapper[4782]: I1124 12:49:11.132560 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kvrl6/crc-debug-j57lv" Nov 24 12:49:11 crc kubenswrapper[4782]: I1124 12:49:11.501537 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3be36ee-1f0f-4fb4-88ba-ea564acb28c3" path="/var/lib/kubelet/pods/c3be36ee-1f0f-4fb4-88ba-ea564acb28c3/volumes" Nov 24 12:49:26 crc kubenswrapper[4782]: I1124 12:49:26.609864 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5584bf45bd-6fhhg_de56c6c9-b982-419d-be5c-97f1f9379747/barbican-api/0.log" Nov 24 12:49:26 crc kubenswrapper[4782]: I1124 12:49:26.866946 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5584bf45bd-6fhhg_de56c6c9-b982-419d-be5c-97f1f9379747/barbican-api-log/0.log" Nov 24 12:49:26 crc kubenswrapper[4782]: I1124 12:49:26.938725 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-567dd88794-rs7lm_4f2c93b3-0f72-4e4e-bc85-c719e2e9954b/barbican-keystone-listener/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.122425 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8668478d95-lb5cp_b310f8bf-62fa-4955-984a-1df40c4e3a38/barbican-worker/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.183436 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-567dd88794-rs7lm_4f2c93b3-0f72-4e4e-bc85-c719e2e9954b/barbican-keystone-listener-log/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.218703 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8668478d95-lb5cp_b310f8bf-62fa-4955-984a-1df40c4e3a38/barbican-worker-log/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.407853 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw_672fd75c-f2f7-4396-a11e-e4e5abf8ab13/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.546184 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/ceilometer-central-agent/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.628095 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/proxy-httpd/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.656522 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/ceilometer-notification-agent/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.718548 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/sg-core/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.950539 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_25280233-1f0e-44f9-80ce-48d3d2413861/cinder-api/0.log" Nov 24 12:49:27 crc kubenswrapper[4782]: I1124 12:49:27.973920 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_25280233-1f0e-44f9-80ce-48d3d2413861/cinder-api-log/0.log" Nov 24 12:49:28 crc kubenswrapper[4782]: I1124 12:49:28.169493 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_611df7d1-ff5a-4747-b3ed-be19deedd3c6/cinder-scheduler/0.log" Nov 24 12:49:28 crc kubenswrapper[4782]: I1124 12:49:28.241130 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_611df7d1-ff5a-4747-b3ed-be19deedd3c6/probe/0.log" Nov 24 12:49:28 crc kubenswrapper[4782]: I1124 12:49:28.326834 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt_521af29a-8b28-4633-adc5-857ca14e0312/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:28 crc kubenswrapper[4782]: I1124 12:49:28.530660 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s4tln_58220605-30a9-4d4f-b785-3e9edabcfb5c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:28 crc kubenswrapper[4782]: I1124 12:49:28.609818 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-qkj56_77f4f46d-6156-43bb-b49d-6371cb8921c1/init/0.log" Nov 24 12:49:28 crc kubenswrapper[4782]: I1124 12:49:28.891714 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ccf95_44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:28 crc kubenswrapper[4782]: I1124 12:49:28.946404 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-qkj56_77f4f46d-6156-43bb-b49d-6371cb8921c1/init/0.log" Nov 24 12:49:29 crc kubenswrapper[4782]: I1124 12:49:29.006323 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-qkj56_77f4f46d-6156-43bb-b49d-6371cb8921c1/dnsmasq-dns/0.log" Nov 24 12:49:29 crc kubenswrapper[4782]: I1124 12:49:29.239919 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c5c3127c-bed5-4d35-b535-fc6ca3f79e86/glance-httpd/0.log" Nov 24 12:49:29 crc kubenswrapper[4782]: I1124 12:49:29.263766 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c5c3127c-bed5-4d35-b535-fc6ca3f79e86/glance-log/0.log" Nov 24 
12:49:29 crc kubenswrapper[4782]: I1124 12:49:29.518478 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_533bc3bf-a4ed-4133-b448-9888eeea6416/glance-httpd/0.log" Nov 24 12:49:29 crc kubenswrapper[4782]: I1124 12:49:29.559135 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_533bc3bf-a4ed-4133-b448-9888eeea6416/glance-log/0.log" Nov 24 12:49:29 crc kubenswrapper[4782]: I1124 12:49:29.959942 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6574f9bb76-jkv6h_41a8247d-b0d2-4a46-b108-bc260db36e11/horizon/2.log" Nov 24 12:49:30 crc kubenswrapper[4782]: I1124 12:49:30.126202 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6574f9bb76-jkv6h_41a8247d-b0d2-4a46-b108-bc260db36e11/horizon/1.log" Nov 24 12:49:30 crc kubenswrapper[4782]: I1124 12:49:30.333763 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-w9859_e7525d3d-3415-44de-a76a-e6de73a7dc1f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:30 crc kubenswrapper[4782]: I1124 12:49:30.355134 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6574f9bb76-jkv6h_41a8247d-b0d2-4a46-b108-bc260db36e11/horizon-log/0.log" Nov 24 12:49:30 crc kubenswrapper[4782]: I1124 12:49:30.690118 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-w7gqb_ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:30 crc kubenswrapper[4782]: I1124 12:49:30.999632 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54579c9c49-nkmgh_4b6ef93c-ca86-4207-8cba-0cd8bc486889/keystone-api/0.log" Nov 24 12:49:31 crc kubenswrapper[4782]: I1124 12:49:31.017827 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6f389ec5-41d8-4afb-9df2-792618e38c30/kube-state-metrics/0.log" Nov 24 12:49:31 crc kubenswrapper[4782]: I1124 12:49:31.221197 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk_1af97733-205a-4fc3-804c-77517c7053db/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:31 crc kubenswrapper[4782]: I1124 12:49:31.780091 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-665949bbb5-7lm9x_6046c36e-6c5a-49e4-850b-d15d227c7851/neutron-httpd/0.log" Nov 24 12:49:31 crc kubenswrapper[4782]: I1124 12:49:31.811211 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-665949bbb5-7lm9x_6046c36e-6c5a-49e4-850b-d15d227c7851/neutron-api/0.log" Nov 24 12:49:32 crc kubenswrapper[4782]: I1124 12:49:32.255533 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46_891636a5-0fde-4436-b3ab-7831d7420439/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:32 crc kubenswrapper[4782]: I1124 12:49:32.480846 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39c39c96-99d5-4e76-9c99-20d1310fe1ac/nova-api-log/0.log" Nov 24 12:49:32 crc kubenswrapper[4782]: I1124 12:49:32.541522 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39c39c96-99d5-4e76-9c99-20d1310fe1ac/nova-api-api/0.log" Nov 24 12:49:32 crc kubenswrapper[4782]: 
I1124 12:49:32.684540 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_79cba7f4-7d61-489a-9c67-41a7a0dc1c28/nova-cell0-conductor-conductor/0.log" Nov 24 12:49:32 crc kubenswrapper[4782]: I1124 12:49:32.987515 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a1d44b32-67f5-4294-962e-e4c2821714f0/nova-cell1-conductor-conductor/0.log" Nov 24 12:49:33 crc kubenswrapper[4782]: I1124 12:49:33.392849 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4154f325-2ba9-4e67-a59e-d5e71d9f8cd8/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 12:49:33 crc kubenswrapper[4782]: I1124 12:49:33.644397 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5vt47_6cd5f290-1276-4bbf-a7c0-9075e776dd0b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:33 crc kubenswrapper[4782]: I1124 12:49:33.863578 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ef6b2c28-7003-45fe-922e-40b6f5c2a43a/nova-metadata-log/0.log" Nov 24 12:49:34 crc kubenswrapper[4782]: I1124 12:49:34.156936 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c234628b-dc63-4176-b7d6-5506de5cd15b/nova-scheduler-scheduler/0.log" Nov 24 12:49:34 crc kubenswrapper[4782]: I1124 12:49:34.184247 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b66c75fd-ec79-4997-9e45-70865f612c8f/mysql-bootstrap/0.log" Nov 24 12:49:34 crc kubenswrapper[4782]: I1124 12:49:34.519124 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b66c75fd-ec79-4997-9e45-70865f612c8f/mysql-bootstrap/0.log" Nov 24 12:49:34 crc kubenswrapper[4782]: I1124 12:49:34.622181 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b66c75fd-ec79-4997-9e45-70865f612c8f/galera/0.log" Nov 24 12:49:34 crc kubenswrapper[4782]: I1124 12:49:34.824565 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ef6b2c28-7003-45fe-922e-40b6f5c2a43a/nova-metadata-metadata/0.log" Nov 24 12:49:34 crc kubenswrapper[4782]: I1124 12:49:34.918099 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b33a2a59-697b-4973-b01d-5933d2319593/mysql-bootstrap/0.log" Nov 24 12:49:35 crc kubenswrapper[4782]: I1124 12:49:35.140833 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b33a2a59-697b-4973-b01d-5933d2319593/galera/0.log" Nov 24 12:49:35 crc kubenswrapper[4782]: I1124 12:49:35.231102 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c7c7aa63-55ae-4525-a262-c5c9d08e4fe7/openstackclient/0.log" Nov 24 12:49:35 crc kubenswrapper[4782]: I1124 12:49:35.237525 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b33a2a59-697b-4973-b01d-5933d2319593/mysql-bootstrap/0.log" Nov 24 12:49:35 crc kubenswrapper[4782]: I1124 12:49:35.637233 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-m6c9b_a62553ed-d73b-49c8-be06-e9ad0542d8da/ovn-controller/0.log" Nov 24 12:49:35 crc kubenswrapper[4782]: I1124 12:49:35.710482 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-hjt97_95804fb9-455c-4226-acb2-97418cd75b7e/openstack-network-exporter/0.log" Nov 24 12:49:35 crc kubenswrapper[4782]: I1124 12:49:35.922686 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovsdb-server-init/0.log" Nov 24 12:49:36 crc kubenswrapper[4782]: I1124 12:49:36.111887 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovsdb-server/0.log" Nov 24 12:49:36 crc kubenswrapper[4782]: I1124 12:49:36.121900 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovsdb-server-init/0.log" Nov 24 12:49:36 crc kubenswrapper[4782]: I1124 12:49:36.171584 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovs-vswitchd/0.log" Nov 24 12:49:36 crc kubenswrapper[4782]: I1124 12:49:36.418345 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kc6qz_b30e01d5-eac0-49f4-88f5-bf4b5758bf1d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:36 crc kubenswrapper[4782]: I1124 12:49:36.499812 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0/openstack-network-exporter/0.log" Nov 24 12:49:36 crc kubenswrapper[4782]: I1124 12:49:36.653158 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0/ovn-northd/0.log" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.095027 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_75e93622-05f8-4afc-868b-0a6f157fa62b/ovsdbserver-nb/0.log" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.097624 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_75e93622-05f8-4afc-868b-0a6f157fa62b/openstack-network-exporter/0.log" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.139803 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8lmvj"] Nov 24 12:49:37 crc kubenswrapper[4782]: E1124 12:49:37.140273 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="registry-server" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.140288 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="registry-server" Nov 24 12:49:37 crc kubenswrapper[4782]: E1124 12:49:37.140321 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="extract-content" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.140330 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="extract-content" Nov 24 12:49:37 crc kubenswrapper[4782]: E1124 12:49:37.140347 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="extract-utilities" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.140355 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="extract-utilities" Nov 24 12:49:37 crc kubenswrapper[4782]: 
E1124 12:49:37.140387 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3be36ee-1f0f-4fb4-88ba-ea564acb28c3" containerName="container-00" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.140395 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3be36ee-1f0f-4fb4-88ba-ea564acb28c3" containerName="container-00" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.159211 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="0005f2e7-f850-4e15-80ab-136a218e9880" containerName="registry-server" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.159252 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3be36ee-1f0f-4fb4-88ba-ea564acb28c3" containerName="container-00" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.167666 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lmvj"] Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.170230 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.333836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-utilities\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.334162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2n7z\" (UniqueName: \"kubernetes.io/projected/ee40841d-7a28-4741-9e41-d0df56dd430e-kube-api-access-q2n7z\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.334212 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-catalog-content\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.421799 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3/openstack-network-exporter/0.log" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.435547 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-utilities\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.435617 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-catalog-content\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.435644 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2n7z\" (UniqueName: 
\"kubernetes.io/projected/ee40841d-7a28-4741-9e41-d0df56dd430e-kube-api-access-q2n7z\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.436220 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-utilities\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.436567 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-catalog-content\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.463966 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2n7z\" (UniqueName: \"kubernetes.io/projected/ee40841d-7a28-4741-9e41-d0df56dd430e-kube-api-access-q2n7z\") pod \"redhat-marketplace-8lmvj\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.500129 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:37 crc kubenswrapper[4782]: I1124 12:49:37.759735 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3/ovsdbserver-sb/0.log" Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.048359 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548457c99b-pdf6j_b571494b-eadd-44e4-b7cd-122dbbaddef5/placement-api/0.log" Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.072237 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548457c99b-pdf6j_b571494b-eadd-44e4-b7cd-122dbbaddef5/placement-log/0.log" Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.153911 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lmvj"] Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.290049 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_39483c87-eb4a-4adf-81de-ae60ec596fe8/setup-container/0.log" Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.409338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerStarted","Data":"3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d"} Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.409394 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerStarted","Data":"e08dfb18542e0d50eb74712217cf7fe1528a3158a83a97b7fc7ae780ffc4b977"} Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.671401 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_39483c87-eb4a-4adf-81de-ae60ec596fe8/setup-container/0.log" Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 
12:49:38.706252 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_39483c87-eb4a-4adf-81de-ae60ec596fe8/rabbitmq/0.log" Nov 24 12:49:38 crc kubenswrapper[4782]: I1124 12:49:38.780429 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9/setup-container/0.log" Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.048923 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9/rabbitmq/0.log" Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.076729 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk_f27d1d98-ecfa-4977-aa6c-abf87b9e244a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.085936 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9/setup-container/0.log" Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.364435 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d2g2l_0b6970e9-155c-4b80-98ee-9305e8b942f2/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.423514 4782 generic.go:334] "Generic (PLEG): container finished" podID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerID="3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d" exitCode=0 Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.423840 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerDied","Data":"3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d"} Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.503293 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24_aa0cf12a-8750-4351-a6a7-e66bf1bb074c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.755504 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6c48d_b11f38fd-d0b3-4272-8c87-921c1d40b832/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:39 crc kubenswrapper[4782]: I1124 12:49:39.852155 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6bsqg_c8f27e6b-2964-4a8b-b976-92fb6421705a/ssh-known-hosts-edpm-deployment/0.log" Nov 24 12:49:40 crc kubenswrapper[4782]: I1124 12:49:40.198998 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-556bd89d59-52m2m_a2fa4f6f-fc43-4b5c-af94-0534b54364d7/proxy-httpd/0.log" Nov 24 12:49:40 crc kubenswrapper[4782]: I1124 12:49:40.275338 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-556bd89d59-52m2m_a2fa4f6f-fc43-4b5c-af94-0534b54364d7/proxy-server/0.log" Nov 24 12:49:40 crc kubenswrapper[4782]: I1124 12:49:40.765427 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dnt8l_8b31b3d1-1239-45a8-9380-693d4ce10324/swift-ring-rebalance/0.log" Nov 24 12:49:40 crc kubenswrapper[4782]: I1124 12:49:40.917587 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-auditor/0.log" Nov 24 12:49:40 crc kubenswrapper[4782]: I1124 12:49:40.928689 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-reaper/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.121773 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-replicator/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.165633 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-server/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.226400 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-auditor/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.300713 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-replicator/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.362626 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-server/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.420654 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-updater/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.507607 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerStarted","Data":"0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186"} Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.521481 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-auditor/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.586064 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-expirer/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.748471 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-replicator/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.783051 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-server/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.794797 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-updater/0.log" Nov 24 12:49:41 crc kubenswrapper[4782]: I1124 12:49:41.906205 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/rsync/0.log" Nov 24 12:49:42 crc kubenswrapper[4782]: I1124 12:49:42.085038 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/swift-recon-cron/0.log" Nov 24 12:49:42 crc kubenswrapper[4782]: I1124 12:49:42.217429 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-snkfr_c9db5a23-263f-41cc-a1b6-b90df79aa8d2/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:42 crc kubenswrapper[4782]: I1124 12:49:42.342171 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_bf2749fb-4ae8-43f8-847e-3d4528d4556a/tempest-tests-tempest-tests-runner/0.log" Nov 24 12:49:42 crc kubenswrapper[4782]: I1124 12:49:42.483893 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2cac3222-c76a-4e73-8333-38f146cec5c9/test-operator-logs-container/0.log" Nov 24 12:49:42 crc kubenswrapper[4782]: I1124 12:49:42.513570 4782 generic.go:334] "Generic (PLEG): container finished" podID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerID="0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186" exitCode=0 Nov 24 12:49:42 crc kubenswrapper[4782]: I1124 12:49:42.513604 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerDied","Data":"0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186"} Nov 24 12:49:42 crc kubenswrapper[4782]: I1124 12:49:42.663396 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp_c153f0a7-9375-40ea-9d60-aad9c960a30a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:49:43 crc kubenswrapper[4782]: I1124 12:49:43.525318 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerStarted","Data":"5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3"} Nov 24 12:49:43 crc kubenswrapper[4782]: I1124 12:49:43.574999 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8lmvj" podStartSLOduration=2.875015359 podStartE2EDuration="6.574871569s" podCreationTimestamp="2025-11-24 12:49:37 +0000 UTC" firstStartedPulling="2025-11-24 12:49:39.44226923 +0000 UTC m=+3228.686102999" lastFinishedPulling="2025-11-24 12:49:43.14212543 +0000 UTC m=+3232.385959209" observedRunningTime="2025-11-24 12:49:43.561564988 +0000 UTC m=+3232.805398757" watchObservedRunningTime="2025-11-24 12:49:43.574871569 +0000 UTC m=+3232.818705358" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.510719 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5rvq4"] Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.513024 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.524729 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rvq4"] Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.593818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-catalog-content\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.593916 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-utilities\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.593950 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckcv\" (UniqueName: \"kubernetes.io/projected/5e589762-23e0-48aa-8ed5-a82a045338bd-kube-api-access-tckcv\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.696321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckcv\" (UniqueName: \"kubernetes.io/projected/5e589762-23e0-48aa-8ed5-a82a045338bd-kube-api-access-tckcv\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.696548 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-catalog-content\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.696629 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-utilities\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.697169 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-utilities\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.697895 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-catalog-content\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.744263 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tckcv\" (UniqueName: \"kubernetes.io/projected/5e589762-23e0-48aa-8ed5-a82a045338bd-kube-api-access-tckcv\") pod \"redhat-operators-5rvq4\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:45 crc kubenswrapper[4782]: I1124 12:49:45.857146 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:46 crc kubenswrapper[4782]: I1124 12:49:46.503312 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rvq4"] Nov 24 12:49:46 crc kubenswrapper[4782]: I1124 12:49:46.638989 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rvq4" event={"ID":"5e589762-23e0-48aa-8ed5-a82a045338bd","Type":"ContainerStarted","Data":"e7ad0df7647fd84f73876ed206214779c3863f4d6cd6d0dfed57cbead453ef87"} Nov 24 12:49:47 crc kubenswrapper[4782]: I1124 12:49:47.501529 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:47 crc kubenswrapper[4782]: I1124 12:49:47.502067 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:47 crc kubenswrapper[4782]: I1124 12:49:47.598762 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:47 crc kubenswrapper[4782]: I1124 12:49:47.664105 4782 generic.go:334] "Generic (PLEG): container finished" podID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerID="dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b" exitCode=0 Nov 24 12:49:47 crc kubenswrapper[4782]: I1124 12:49:47.666012 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rvq4" event={"ID":"5e589762-23e0-48aa-8ed5-a82a045338bd","Type":"ContainerDied","Data":"dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b"} Nov 24 12:49:48 crc kubenswrapper[4782]: I1124 12:49:48.287671 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_579fda47-7251-4722-b19c-eadbf6aaba21/memcached/0.log" Nov 24 12:49:48 crc kubenswrapper[4782]: I1124 12:49:48.676340 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rvq4" event={"ID":"5e589762-23e0-48aa-8ed5-a82a045338bd","Type":"ContainerStarted","Data":"5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee"} Nov 24 12:49:54 crc kubenswrapper[4782]: I1124 12:49:54.726655 4782 generic.go:334] "Generic (PLEG): container finished" podID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerID="5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee" exitCode=0 Nov 24 12:49:54 crc kubenswrapper[4782]: I1124 12:49:54.726858 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rvq4" event={"ID":"5e589762-23e0-48aa-8ed5-a82a045338bd","Type":"ContainerDied","Data":"5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee"} Nov 24 12:49:55 crc kubenswrapper[4782]: I1124 12:49:55.740022 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rvq4" event={"ID":"5e589762-23e0-48aa-8ed5-a82a045338bd","Type":"ContainerStarted","Data":"b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b"} Nov 24 12:49:55 crc kubenswrapper[4782]: I1124 
12:49:55.858137 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:55 crc kubenswrapper[4782]: I1124 12:49:55.858187 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:49:56 crc kubenswrapper[4782]: I1124 12:49:56.911176 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5rvq4" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" probeResult="failure" output=< Nov 24 12:49:56 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:49:56 crc kubenswrapper[4782]: > Nov 24 12:49:57 crc kubenswrapper[4782]: I1124 12:49:57.561771 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:57 crc kubenswrapper[4782]: I1124 12:49:57.593816 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5rvq4" podStartSLOduration=5.029305632 podStartE2EDuration="12.59379724s" podCreationTimestamp="2025-11-24 12:49:45 +0000 UTC" firstStartedPulling="2025-11-24 12:49:47.668025866 +0000 UTC m=+3236.911859635" lastFinishedPulling="2025-11-24 12:49:55.232517474 +0000 UTC m=+3244.476351243" observedRunningTime="2025-11-24 12:49:55.766878449 +0000 UTC m=+3245.010712218" watchObservedRunningTime="2025-11-24 12:49:57.59379724 +0000 UTC m=+3246.837631009" Nov 24 12:49:57 crc kubenswrapper[4782]: I1124 12:49:57.623151 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lmvj"] Nov 24 12:49:57 crc kubenswrapper[4782]: I1124 12:49:57.757214 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8lmvj" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="registry-server" containerID="cri-o://5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3" gracePeriod=2 Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.281461 4782 util.go:48] "No ready sandbox for pod can be found. 
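
The pod_startup_latency_tracker entry above encodes a simple relationship that its numbers bear out: podStartSLOduration is podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python sketch reproducing the arithmetic from that entry, with timestamps truncated to microseconds since datetime carries no nanosecond field:

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f %z"
    created   = datetime.strptime("2025-11-24 12:49:45.000000 +0000", FMT)  # podCreationTimestamp
    pull_from = datetime.strptime("2025-11-24 12:49:47.668025 +0000", FMT)  # firstStartedPulling
    pull_to   = datetime.strptime("2025-11-24 12:49:55.232517 +0000", FMT)  # lastFinishedPulling
    running   = datetime.strptime("2025-11-24 12:49:57.593797 +0000", FMT)  # watchObservedRunningTime

    e2e = (running - created).total_seconds()           # ~12.5938s = podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()   # ~5.0293s  = podStartSLOduration
    print(f"e2e={e2e:.6f}s slo={slo:.6f}s")
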
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.419862 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-utilities\") pod \"ee40841d-7a28-4741-9e41-d0df56dd430e\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.419996 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2n7z\" (UniqueName: \"kubernetes.io/projected/ee40841d-7a28-4741-9e41-d0df56dd430e-kube-api-access-q2n7z\") pod \"ee40841d-7a28-4741-9e41-d0df56dd430e\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.420121 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-catalog-content\") pod \"ee40841d-7a28-4741-9e41-d0df56dd430e\" (UID: \"ee40841d-7a28-4741-9e41-d0df56dd430e\") " Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.420584 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-utilities" (OuterVolumeSpecName: "utilities") pod "ee40841d-7a28-4741-9e41-d0df56dd430e" (UID: "ee40841d-7a28-4741-9e41-d0df56dd430e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.438857 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee40841d-7a28-4741-9e41-d0df56dd430e" (UID: "ee40841d-7a28-4741-9e41-d0df56dd430e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.439304 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee40841d-7a28-4741-9e41-d0df56dd430e-kube-api-access-q2n7z" (OuterVolumeSpecName: "kube-api-access-q2n7z") pod "ee40841d-7a28-4741-9e41-d0df56dd430e" (UID: "ee40841d-7a28-4741-9e41-d0df56dd430e"). InnerVolumeSpecName "kube-api-access-q2n7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.523789 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.523831 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee40841d-7a28-4741-9e41-d0df56dd430e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.523843 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2n7z\" (UniqueName: \"kubernetes.io/projected/ee40841d-7a28-4741-9e41-d0df56dd430e-kube-api-access-q2n7z\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.768715 4782 generic.go:334] "Generic (PLEG): container finished" podID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerID="5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3" exitCode=0 Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.768771 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lmvj" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.768788 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerDied","Data":"5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3"} Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.769172 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lmvj" event={"ID":"ee40841d-7a28-4741-9e41-d0df56dd430e","Type":"ContainerDied","Data":"e08dfb18542e0d50eb74712217cf7fe1528a3158a83a97b7fc7ae780ffc4b977"} Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.769191 4782 scope.go:117] "RemoveContainer" containerID="5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.797244 4782 scope.go:117] "RemoveContainer" containerID="0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.821727 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lmvj"] Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.822611 4782 scope.go:117] "RemoveContainer" containerID="3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.842007 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lmvj"] Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.868216 4782 scope.go:117] "RemoveContainer" containerID="5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3" Nov 24 12:49:58 crc kubenswrapper[4782]: E1124 12:49:58.869143 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3\": container with ID starting with 5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3 not found: ID does not exist" containerID="5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.869265 4782 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3"} err="failed to get container status \"5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3\": rpc error: code = NotFound desc = could not find container \"5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3\": container with ID starting with 5f725d437c61d058b7240492ed0eb420f51f052046cfb5466afde9d12bc898e3 not found: ID does not exist" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.869359 4782 scope.go:117] "RemoveContainer" containerID="0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186" Nov 24 12:49:58 crc kubenswrapper[4782]: E1124 12:49:58.871145 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186\": container with ID starting with 0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186 not found: ID does not exist" containerID="0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.871190 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186"} err="failed to get container status \"0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186\": rpc error: code = NotFound desc = could not find container \"0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186\": container with ID starting with 0692a7d772c9653190508cb37cbbc0e19ec5bfbbb9fe5ac5a2ddc03eb10b7186 not found: ID does not exist" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.871220 4782 scope.go:117] "RemoveContainer" containerID="3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d" Nov 24 12:49:58 crc kubenswrapper[4782]: E1124 12:49:58.871578 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d\": container with ID starting with 3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d not found: ID does not exist" containerID="3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d" Nov 24 12:49:58 crc kubenswrapper[4782]: I1124 12:49:58.871619 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d"} err="failed to get container status \"3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d\": rpc error: code = NotFound desc = could not find container \"3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d\": container with ID starting with 3688445075c10fb6204e7b8f80f4760d06b8ed539411c8ff3152d5f7b83e491d not found: ID does not exist" Nov 24 12:49:59 crc kubenswrapper[4782]: I1124 12:49:59.501245 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" path="/var/lib/kubelet/pods/ee40841d-7a28-4741-9e41-d0df56dd430e/volumes" Nov 24 12:50:06 crc kubenswrapper[4782]: I1124 12:50:06.905741 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5rvq4" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" probeResult="failure" output=< Nov 24 12:50:06 crc 
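
The three RemoveContainer / "ContainerStatus from runtime service failed" pairs above are benign: by the time the kubelet re-queries each container of the deleted redhat-marketplace-8lmvj pod, CRI-O has already removed it, and NotFound is effectively treated as success. A sketch of that idempotent-deletion pattern (the runtime interface here is hypothetical, not the actual CRI client API):

    class NotFoundError(Exception):
        """Raised by the (hypothetical) runtime when a container ID is unknown."""

    def remove_container(runtime, container_id: str) -> None:
        # A racing delete is fine: NotFound just means the container is
        # already gone, which is the end state we wanted anyway.
        try:
            runtime.remove(container_id)
        except NotFoundError:
            pass  # log and carry on, as the kubelet entries above do
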
kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:50:06 crc kubenswrapper[4782]: > Nov 24 12:50:14 crc kubenswrapper[4782]: I1124 12:50:14.610978 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-kr5jc_9adec34d-0e3d-4f65-80b2-4ba1c0731be4/kube-rbac-proxy/0.log" Nov 24 12:50:14 crc kubenswrapper[4782]: I1124 12:50:14.707549 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-kr5jc_9adec34d-0e3d-4f65-80b2-4ba1c0731be4/manager/0.log" Nov 24 12:50:14 crc kubenswrapper[4782]: I1124 12:50:14.859131 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k5z2n_45393058-140b-48ea-9691-9bbe0740342b/kube-rbac-proxy/0.log" Nov 24 12:50:14 crc kubenswrapper[4782]: I1124 12:50:14.993666 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k5z2n_45393058-140b-48ea-9691-9bbe0740342b/manager/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.225065 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/util/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.329362 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/util/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.372296 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/pull/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.382662 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/pull/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.599051 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/util/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.621016 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/pull/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.678574 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/extract/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.820431 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-czqnv_5ace8cad-a0d4-4ba1-99f8-a097edd76a74/kube-rbac-proxy/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.913305 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-czqnv_5ace8cad-a0d4-4ba1-99f8-a097edd76a74/manager/0.log" Nov 24 12:50:15 crc kubenswrapper[4782]: I1124 12:50:15.981224 4782 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-kmhd6_ba20509b-c083-42f1-bf39-be2ed4a463f7/kube-rbac-proxy/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.205129 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-kmhd6_ba20509b-c083-42f1-bf39-be2ed4a463f7/manager/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.228228 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-wmsss_2ae11a51-1628-454f-8b78-77e9aaa2691b/kube-rbac-proxy/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.265538 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-wmsss_2ae11a51-1628-454f-8b78-77e9aaa2691b/manager/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.401978 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-nqd5j_e6982d2e-f7d3-4374-bc66-7949d3bcc062/kube-rbac-proxy/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.480726 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-nqd5j_e6982d2e-f7d3-4374-bc66-7949d3bcc062/manager/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.680114 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-fjggr_61b6c96b-b73c-47b5-8e05-988870f4587f/kube-rbac-proxy/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.854487 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-whlt5_6b6efe11-117c-42a3-baa5-b43b07557e43/kube-rbac-proxy/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.916823 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-fjggr_61b6c96b-b73c-47b5-8e05-988870f4587f/manager/0.log" Nov 24 12:50:16 crc kubenswrapper[4782]: I1124 12:50:16.918826 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5rvq4" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" probeResult="failure" output=< Nov 24 12:50:16 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:50:16 crc kubenswrapper[4782]: > Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.033993 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-whlt5_6b6efe11-117c-42a3-baa5-b43b07557e43/manager/0.log" Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.145595 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b854ddf99-pb2wn_7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c/kube-rbac-proxy/0.log" Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.212178 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b854ddf99-pb2wn_7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c/manager/0.log" Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.403878 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-ctr6x_ef74c0aa-ac31-49b1-861d-258fe0a3ddff/manager/0.log" Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.443302 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-ctr6x_ef74c0aa-ac31-49b1-861d-258fe0a3ddff/kube-rbac-proxy/0.log" Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.618715 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-57dk2_34e6f50e-248f-4ef3-a145-83ccb7616d0d/kube-rbac-proxy/0.log" Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.719694 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-57dk2_34e6f50e-248f-4ef3-a145-83ccb7616d0d/manager/0.log" Nov 24 12:50:17 crc kubenswrapper[4782]: I1124 12:50:17.886030 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tmp5g_9e50a599-1a70-46c9-94a1-d3148778888d/kube-rbac-proxy/0.log" Nov 24 12:50:18 crc kubenswrapper[4782]: I1124 12:50:18.013794 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tmp5g_9e50a599-1a70-46c9-94a1-d3148778888d/manager/0.log" Nov 24 12:50:18 crc kubenswrapper[4782]: I1124 12:50:18.132963 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2hr2j_ec2d6fc2-5418-4263-a351-0422b2d5068d/kube-rbac-proxy/0.log" Nov 24 12:50:18 crc kubenswrapper[4782]: I1124 12:50:18.240740 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2hr2j_ec2d6fc2-5418-4263-a351-0422b2d5068d/manager/0.log" Nov 24 12:50:18 crc kubenswrapper[4782]: I1124 12:50:18.331785 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-kh9k5_a09a3b55-484e-461d-9f95-1e3279b323c5/kube-rbac-proxy/0.log" Nov 24 12:50:18 crc kubenswrapper[4782]: I1124 12:50:18.421514 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-kh9k5_a09a3b55-484e-461d-9f95-1e3279b323c5/manager/0.log" Nov 24 12:50:18 crc kubenswrapper[4782]: I1124 12:50:18.578035 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-6jvns_ecf25e22-396d-4c6d-9585-566ffc0d0092/kube-rbac-proxy/0.log" Nov 24 12:50:18 crc kubenswrapper[4782]: I1124 12:50:18.664552 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-6jvns_ecf25e22-396d-4c6d-9585-566ffc0d0092/manager/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.109091 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bd4c479c8-db2zp_8aef8676-912f-4585-a5bb-a494867bf2e9/operator/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.188003 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qkfdv_f26694c9-e51b-4e17-b20c-eafbb8164ba8/registry-server/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.402917 4782 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-7pb8k_a9dcd8ef-dbbf-43dc-97a0-e77d942ff589/manager/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.531875 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-7pb8k_a9dcd8ef-dbbf-43dc-97a0-e77d942ff589/kube-rbac-proxy/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.879570 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k6n5f_60a3fcad-0c5a-4be2-b89b-4d143d3a8e62/kube-rbac-proxy/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.932030 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-689b7ddfcc-9brt2_55628383-51b4-4c77-ac10-476769165984/manager/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.932926 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k6n5f_60a3fcad-0c5a-4be2-b89b-4d143d3a8e62/manager/0.log" Nov 24 12:50:19 crc kubenswrapper[4782]: I1124 12:50:19.972049 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4ltpx_f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a/operator/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.173228 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pdvh7_5f8b3ed3-fba7-4a0e-8245-f822c548082e/kube-rbac-proxy/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.209629 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-n6cqm_76bc751e-4645-4cb1-bdfe-7e3c6732505b/kube-rbac-proxy/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.239718 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pdvh7_5f8b3ed3-fba7-4a0e-8245-f822c548082e/manager/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.361781 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-n6cqm_76bc751e-4645-4cb1-bdfe-7e3c6732505b/manager/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.513739 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8xf2r_a0f8d31c-392e-468d-9a86-b5a482dbc6fb/manager/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.528799 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8xf2r_a0f8d31c-392e-468d-9a86-b5a482dbc6fb/kube-rbac-proxy/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.660147 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-clxm4_19e8c85c-675d-433f-8346-878034f14d24/kube-rbac-proxy/0.log" Nov 24 12:50:20 crc kubenswrapper[4782]: I1124 12:50:20.728906 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-clxm4_19e8c85c-675d-433f-8346-878034f14d24/manager/0.log" Nov 24 12:50:26 crc kubenswrapper[4782]: I1124 12:50:26.915232 4782 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5rvq4" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" probeResult="failure" output=< Nov 24 12:50:26 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:50:26 crc kubenswrapper[4782]: > Nov 24 12:50:30 crc kubenswrapper[4782]: I1124 12:50:30.411227 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:50:30 crc kubenswrapper[4782]: I1124 12:50:30.411570 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:50:35 crc kubenswrapper[4782]: I1124 12:50:35.906164 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:50:35 crc kubenswrapper[4782]: I1124 12:50:35.958962 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:50:36 crc kubenswrapper[4782]: I1124 12:50:36.153383 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5rvq4"] Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.104188 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5rvq4" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" containerID="cri-o://b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b" gracePeriod=2 Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.598924 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.735940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-utilities\") pod \"5e589762-23e0-48aa-8ed5-a82a045338bd\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.736245 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-catalog-content\") pod \"5e589762-23e0-48aa-8ed5-a82a045338bd\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.736545 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-utilities" (OuterVolumeSpecName: "utilities") pod "5e589762-23e0-48aa-8ed5-a82a045338bd" (UID: "5e589762-23e0-48aa-8ed5-a82a045338bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.736695 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tckcv\" (UniqueName: \"kubernetes.io/projected/5e589762-23e0-48aa-8ed5-a82a045338bd-kube-api-access-tckcv\") pod \"5e589762-23e0-48aa-8ed5-a82a045338bd\" (UID: \"5e589762-23e0-48aa-8ed5-a82a045338bd\") " Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.737343 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.743653 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e589762-23e0-48aa-8ed5-a82a045338bd-kube-api-access-tckcv" (OuterVolumeSpecName: "kube-api-access-tckcv") pod "5e589762-23e0-48aa-8ed5-a82a045338bd" (UID: "5e589762-23e0-48aa-8ed5-a82a045338bd"). InnerVolumeSpecName "kube-api-access-tckcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.840502 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tckcv\" (UniqueName: \"kubernetes.io/projected/5e589762-23e0-48aa-8ed5-a82a045338bd-kube-api-access-tckcv\") on node \"crc\" DevicePath \"\"" Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.874163 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e589762-23e0-48aa-8ed5-a82a045338bd" (UID: "5e589762-23e0-48aa-8ed5-a82a045338bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:50:37 crc kubenswrapper[4782]: I1124 12:50:37.942673 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e589762-23e0-48aa-8ed5-a82a045338bd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.114197 4782 generic.go:334] "Generic (PLEG): container finished" podID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerID="b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b" exitCode=0 Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.114258 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rvq4" event={"ID":"5e589762-23e0-48aa-8ed5-a82a045338bd","Type":"ContainerDied","Data":"b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b"} Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.114310 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rvq4" event={"ID":"5e589762-23e0-48aa-8ed5-a82a045338bd","Type":"ContainerDied","Data":"e7ad0df7647fd84f73876ed206214779c3863f4d6cd6d0dfed57cbead453ef87"} Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.114327 4782 scope.go:117] "RemoveContainer" containerID="b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.115379 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5rvq4" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.123742 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-662wl_3d0cecf6-1037-494f-a783-682ba2b70960/control-plane-machine-set-operator/0.log" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.152212 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5rvq4"] Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.158745 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5rvq4"] Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.177407 4782 scope.go:117] "RemoveContainer" containerID="5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.224971 4782 scope.go:117] "RemoveContainer" containerID="dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.258215 4782 scope.go:117] "RemoveContainer" containerID="b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b" Nov 24 12:50:38 crc kubenswrapper[4782]: E1124 12:50:38.258904 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b\": container with ID starting with b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b not found: ID does not exist" containerID="b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.258947 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b"} err="failed to get container status \"b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b\": rpc error: code = NotFound desc = could not find container \"b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b\": container with ID starting with b8d3b879f26b444fbe536858c824adf4dcf24a80c5b920da02533582f594b66b not found: ID does not exist" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.258973 4782 scope.go:117] "RemoveContainer" containerID="5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee" Nov 24 12:50:38 crc kubenswrapper[4782]: E1124 12:50:38.259635 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee\": container with ID starting with 5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee not found: ID does not exist" containerID="5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.259656 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee"} err="failed to get container status \"5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee\": rpc error: code = NotFound desc = could not find container \"5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee\": container with ID starting with 5de65124e6a8ac991ab34ce45ee8dd15faa37f39bcd6fbf58925008a7fd35fee not found: ID does not exist" Nov 24 12:50:38 crc 
kubenswrapper[4782]: I1124 12:50:38.259671 4782 scope.go:117] "RemoveContainer" containerID="dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b" Nov 24 12:50:38 crc kubenswrapper[4782]: E1124 12:50:38.260170 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b\": container with ID starting with dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b not found: ID does not exist" containerID="dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.260190 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b"} err="failed to get container status \"dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b\": rpc error: code = NotFound desc = could not find container \"dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b\": container with ID starting with dc7f71091aeeef0eedb6a5c790aa095858a5b3071b3ae6ec50e028a35bbab38b not found: ID does not exist" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.396443 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bhx6w_9babc041-e14e-4226-aebc-50e771089c3c/kube-rbac-proxy/0.log" Nov 24 12:50:38 crc kubenswrapper[4782]: I1124 12:50:38.494277 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bhx6w_9babc041-e14e-4226-aebc-50e771089c3c/machine-api-operator/0.log" Nov 24 12:50:39 crc kubenswrapper[4782]: I1124 12:50:39.502178 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" path="/var/lib/kubelet/pods/5e589762-23e0-48aa-8ed5-a82a045338bd/volumes" Nov 24 12:50:50 crc kubenswrapper[4782]: I1124 12:50:50.233994 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-4dr44_5598822c-dc55-41dd-bb17-7657376575e7/cert-manager-controller/0.log" Nov 24 12:50:50 crc kubenswrapper[4782]: I1124 12:50:50.323112 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-tmfht_3df46084-4a7d-46f9-9b83-0980a55f1752/cert-manager-cainjector/0.log" Nov 24 12:50:50 crc kubenswrapper[4782]: I1124 12:50:50.425397 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2k9fj_4070eb87-d044-4a58-8a71-1a9a53cc0ad2/cert-manager-webhook/0.log" Nov 24 12:51:00 crc kubenswrapper[4782]: I1124 12:51:00.411087 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:51:00 crc kubenswrapper[4782]: I1124 12:51:00.411675 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:51:02 crc kubenswrapper[4782]: I1124 12:51:02.352122 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-w8qrk_8e70ec59-8a74-4f10-bddd-f30177d331f4/nmstate-console-plugin/0.log" Nov 24 12:51:02 crc kubenswrapper[4782]: I1124 12:51:02.498954 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2fms8_ec671193-a1fa-4295-8ac6-6f2df89a3687/nmstate-handler/0.log" Nov 24 12:51:02 crc kubenswrapper[4782]: I1124 12:51:02.645588 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-zmbn9_7b438b30-337c-4f13-8973-2a170ccb7a2a/kube-rbac-proxy/0.log" Nov 24 12:51:02 crc kubenswrapper[4782]: I1124 12:51:02.719281 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-zmbn9_7b438b30-337c-4f13-8973-2a170ccb7a2a/nmstate-metrics/0.log" Nov 24 12:51:02 crc kubenswrapper[4782]: I1124 12:51:02.767963 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-b4h4d_e2997dd2-a58c-48d1-b003-5e90a0df8a2d/nmstate-operator/0.log" Nov 24 12:51:02 crc kubenswrapper[4782]: I1124 12:51:02.927062 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-swb2c_93ae4c19-bf24-48ea-96db-36a5bdd72d01/nmstate-webhook/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.139997 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7tjql_9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95/kube-rbac-proxy/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.241345 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7tjql_9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95/controller/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.356150 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.543442 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.585194 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.608362 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.752202 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.837530 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.872301 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.896346 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:51:17 crc kubenswrapper[4782]: I1124 12:51:17.983952 4782 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.130112 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.180977 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.193387 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.197300 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/controller/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.414198 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/kube-rbac-proxy-frr/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.459751 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/frr-metrics/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.481817 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/kube-rbac-proxy/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.661314 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/reloader/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.738636 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-krmz8_d4164755-f714-472a-9c05-c9978612bce6/frr-k8s-webhook-server/0.log" Nov 24 12:51:18 crc kubenswrapper[4782]: I1124 12:51:18.890786 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cf8447d56-ls4f7_e57d9da0-c929-401e-9311-7c2caa53e702/manager/0.log" Nov 24 12:51:19 crc kubenswrapper[4782]: I1124 12:51:19.171432 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8688c9769b-zmxnc_a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661/webhook-server/0.log" Nov 24 12:51:19 crc kubenswrapper[4782]: I1124 12:51:19.406030 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/frr/0.log" Nov 24 12:51:19 crc kubenswrapper[4782]: I1124 12:51:19.606728 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xrpzs_cbc9434a-55e0-497d-9658-7531208c412e/kube-rbac-proxy/0.log" Nov 24 12:51:19 crc kubenswrapper[4782]: I1124 12:51:19.796431 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xrpzs_cbc9434a-55e0-497d-9658-7531208c412e/speaker/0.log" Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.410948 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.411519 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.411566 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.412306 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e327dfe85cc20d32dff283802dfe42d09ab68e0f3ad1c8b5f638e24d0843354c"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.412364 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://e327dfe85cc20d32dff283802dfe42d09ab68e0f3ad1c8b5f638e24d0843354c" gracePeriod=600 Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.574671 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="e327dfe85cc20d32dff283802dfe42d09ab68e0f3ad1c8b5f638e24d0843354c" exitCode=0 Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.574736 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"e327dfe85cc20d32dff283802dfe42d09ab68e0f3ad1c8b5f638e24d0843354c"} Nov 24 12:51:30 crc kubenswrapper[4782]: I1124 12:51:30.574815 4782 scope.go:117] "RemoveContainer" containerID="9cc5391f1ea58f41a901001a8fcc8dc2d51e31876c7d4da59cb05d95f5741018" Nov 24 12:51:31 crc kubenswrapper[4782]: I1124 12:51:31.583995 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c"} Nov 24 12:51:32 crc kubenswrapper[4782]: I1124 12:51:32.513822 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/util/0.log" Nov 24 12:51:32 crc kubenswrapper[4782]: I1124 12:51:32.759301 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/util/0.log" Nov 24 12:51:32 crc kubenswrapper[4782]: I1124 12:51:32.836326 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/pull/0.log" Nov 24 12:51:32 crc kubenswrapper[4782]: I1124 12:51:32.899096 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/pull/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.013793 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/pull/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.021341 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/util/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.085619 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/extract/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.199620 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-utilities/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.421739 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-utilities/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.455766 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-content/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.488468 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-content/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.674483 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-utilities/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.675935 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-content/0.log" Nov 24 12:51:33 crc kubenswrapper[4782]: I1124 12:51:33.967589 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kgk8_b28b07c9-871a-416e-8eb0-7ada07825bac/extract-utilities/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.039307 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/registry-server/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.190883 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kgk8_b28b07c9-871a-416e-8eb0-7ada07825bac/extract-utilities/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.231522 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kgk8_b28b07c9-871a-416e-8eb0-7ada07825bac/extract-content/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.248708 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kgk8_b28b07c9-871a-416e-8eb0-7ada07825bac/extract-content/0.log" Nov 24 12:51:34 
crc kubenswrapper[4782]: I1124 12:51:34.391777 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kgk8_b28b07c9-871a-416e-8eb0-7ada07825bac/extract-utilities/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.395623 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kgk8_b28b07c9-871a-416e-8eb0-7ada07825bac/extract-content/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.674013 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/util/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.984021 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/pull/0.log" Nov 24 12:51:34 crc kubenswrapper[4782]: I1124 12:51:34.989981 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kgk8_b28b07c9-871a-416e-8eb0-7ada07825bac/registry-server/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.015493 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/pull/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.024646 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/util/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.207204 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/util/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.214268 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/extract/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.241786 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/pull/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.443119 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gzb6p_b5fb7f2d-5841-44a3-a7cc-41b44c66cd73/marketplace-operator/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.506687 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-utilities/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.722157 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-content/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.737734 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-utilities/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.777343 4782 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5wvq"] Nov 24 12:51:35 crc kubenswrapper[4782]: E1124 12:51:35.777780 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="extract-content" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.777796 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="extract-content" Nov 24 12:51:35 crc kubenswrapper[4782]: E1124 12:51:35.777812 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="extract-utilities" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.777819 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="extract-utilities" Nov 24 12:51:35 crc kubenswrapper[4782]: E1124 12:51:35.777832 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="extract-content" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.777838 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="extract-content" Nov 24 12:51:35 crc kubenswrapper[4782]: E1124 12:51:35.777852 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.777857 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" Nov 24 12:51:35 crc kubenswrapper[4782]: E1124 12:51:35.777871 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="registry-server" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.777878 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="registry-server" Nov 24 12:51:35 crc kubenswrapper[4782]: E1124 12:51:35.777898 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="extract-utilities" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.777905 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="extract-utilities" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.778077 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee40841d-7a28-4741-9e41-d0df56dd430e" containerName="registry-server" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.778096 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e589762-23e0-48aa-8ed5-a82a045338bd" containerName="registry-server" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.779459 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.783293 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-content/0.log" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.793669 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5wvq"] Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.852254 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-utilities\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.852319 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mq4\" (UniqueName: \"kubernetes.io/projected/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-kube-api-access-74mq4\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.852354 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-catalog-content\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.954079 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-utilities\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.954134 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74mq4\" (UniqueName: \"kubernetes.io/projected/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-kube-api-access-74mq4\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.954159 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-catalog-content\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.954598 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-catalog-content\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.954772 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-utilities\") 
pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:35 crc kubenswrapper[4782]: I1124 12:51:35.973177 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74mq4\" (UniqueName: \"kubernetes.io/projected/b0d5387e-15d2-42c7-9717-0e2e3e30ee09-kube-api-access-74mq4\") pod \"community-operators-d5wvq\" (UID: \"b0d5387e-15d2-42c7-9717-0e2e3e30ee09\") " pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.061217 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-utilities/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.090587 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-content/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.100216 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.158141 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/registry-server/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.357860 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-utilities/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.579792 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-utilities/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.583228 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5wvq"] Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.615601 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-content/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.637564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5wvq" event={"ID":"b0d5387e-15d2-42c7-9717-0e2e3e30ee09","Type":"ContainerStarted","Data":"b3ea79cc822a92ee9905963d6bc4a8acc4406263ef6dc1024500165672b50b03"} Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.639045 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-content/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.846649 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-utilities/0.log" Nov 24 12:51:36 crc kubenswrapper[4782]: I1124 12:51:36.879799 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-content/0.log" Nov 24 12:51:37 crc kubenswrapper[4782]: I1124 12:51:37.387725 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/registry-server/0.log" Nov 24 12:51:37 crc kubenswrapper[4782]: I1124 12:51:37.653802 4782 generic.go:334] "Generic (PLEG): container finished" podID="b0d5387e-15d2-42c7-9717-0e2e3e30ee09" containerID="cba78a32db9c8a97e0e7ac8c641144adf7797bbf105b99e4fff6bc45b23e10de" exitCode=0 Nov 24 12:51:37 crc kubenswrapper[4782]: I1124 12:51:37.654052 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5wvq" event={"ID":"b0d5387e-15d2-42c7-9717-0e2e3e30ee09","Type":"ContainerDied","Data":"cba78a32db9c8a97e0e7ac8c641144adf7797bbf105b99e4fff6bc45b23e10de"} Nov 24 12:51:43 crc kubenswrapper[4782]: I1124 12:51:43.708317 4782 generic.go:334] "Generic (PLEG): container finished" podID="b0d5387e-15d2-42c7-9717-0e2e3e30ee09" containerID="d6f5a45e452d4ea07a61391b0f850c7fb522f8290165be5df4dec647d10bfb39" exitCode=0 Nov 24 12:51:43 crc kubenswrapper[4782]: I1124 12:51:43.708361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5wvq" event={"ID":"b0d5387e-15d2-42c7-9717-0e2e3e30ee09","Type":"ContainerDied","Data":"d6f5a45e452d4ea07a61391b0f850c7fb522f8290165be5df4dec647d10bfb39"} Nov 24 12:51:45 crc kubenswrapper[4782]: I1124 12:51:45.726434 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5wvq" event={"ID":"b0d5387e-15d2-42c7-9717-0e2e3e30ee09","Type":"ContainerStarted","Data":"713a1583b85017e60f048ebf09718e2c14e3ce1a9dcf8d4c78c6cb59f7a822e0"} Nov 24 12:51:45 crc kubenswrapper[4782]: I1124 12:51:45.753540 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5wvq" podStartSLOduration=3.540174585 podStartE2EDuration="10.753516786s" podCreationTimestamp="2025-11-24 12:51:35 +0000 UTC" firstStartedPulling="2025-11-24 12:51:37.657284979 +0000 UTC m=+3346.901118748" lastFinishedPulling="2025-11-24 12:51:44.87062718 +0000 UTC m=+3354.114460949" observedRunningTime="2025-11-24 12:51:45.743410707 +0000 UTC m=+3354.987244476" watchObservedRunningTime="2025-11-24 12:51:45.753516786 +0000 UTC m=+3354.997350565" Nov 24 12:51:46 crc kubenswrapper[4782]: I1124 12:51:46.100790 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:46 crc kubenswrapper[4782]: I1124 12:51:46.100875 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:47 crc kubenswrapper[4782]: I1124 12:51:47.154206 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d5wvq" podUID="b0d5387e-15d2-42c7-9717-0e2e3e30ee09" containerName="registry-server" probeResult="failure" output=< Nov 24 12:51:47 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 12:51:47 crc kubenswrapper[4782]: > Nov 24 12:51:56 crc kubenswrapper[4782]: I1124 12:51:56.151284 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:56 crc kubenswrapper[4782]: I1124 12:51:56.220810 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5wvq" Nov 24 12:51:56 crc kubenswrapper[4782]: I1124 12:51:56.374973 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-d5wvq"] Nov 24 12:51:56 crc kubenswrapper[4782]: I1124 12:51:56.457919 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8kgk8"] Nov 24 12:51:56 crc kubenswrapper[4782]: I1124 12:51:56.458929 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8kgk8" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="registry-server" containerID="cri-o://fc6b43878c5c68c368ce8f7b965c92c0b232bc6fe8a4169e4ecc342b65fd0136" gracePeriod=2 Nov 24 12:51:56 crc kubenswrapper[4782]: I1124 12:51:56.847069 4782 generic.go:334] "Generic (PLEG): container finished" podID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerID="fc6b43878c5c68c368ce8f7b965c92c0b232bc6fe8a4169e4ecc342b65fd0136" exitCode=0 Nov 24 12:51:56 crc kubenswrapper[4782]: I1124 12:51:56.847247 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kgk8" event={"ID":"b28b07c9-871a-416e-8eb0-7ada07825bac","Type":"ContainerDied","Data":"fc6b43878c5c68c368ce8f7b965c92c0b232bc6fe8a4169e4ecc342b65fd0136"} Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.133027 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.213570 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dklbl\" (UniqueName: \"kubernetes.io/projected/b28b07c9-871a-416e-8eb0-7ada07825bac-kube-api-access-dklbl\") pod \"b28b07c9-871a-416e-8eb0-7ada07825bac\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.213699 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-utilities\") pod \"b28b07c9-871a-416e-8eb0-7ada07825bac\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.214101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-catalog-content\") pod \"b28b07c9-871a-416e-8eb0-7ada07825bac\" (UID: \"b28b07c9-871a-416e-8eb0-7ada07825bac\") " Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.219452 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-utilities" (OuterVolumeSpecName: "utilities") pod "b28b07c9-871a-416e-8eb0-7ada07825bac" (UID: "b28b07c9-871a-416e-8eb0-7ada07825bac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.229774 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28b07c9-871a-416e-8eb0-7ada07825bac-kube-api-access-dklbl" (OuterVolumeSpecName: "kube-api-access-dklbl") pod "b28b07c9-871a-416e-8eb0-7ada07825bac" (UID: "b28b07c9-871a-416e-8eb0-7ada07825bac"). InnerVolumeSpecName "kube-api-access-dklbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.316365 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dklbl\" (UniqueName: \"kubernetes.io/projected/b28b07c9-871a-416e-8eb0-7ada07825bac-kube-api-access-dklbl\") on node \"crc\" DevicePath \"\"" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.316419 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.339884 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b28b07c9-871a-416e-8eb0-7ada07825bac" (UID: "b28b07c9-871a-416e-8eb0-7ada07825bac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.418411 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28b07c9-871a-416e-8eb0-7ada07825bac-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.858103 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kgk8" event={"ID":"b28b07c9-871a-416e-8eb0-7ada07825bac","Type":"ContainerDied","Data":"7973d0a92044ed006b40d7aae750f637f17d0c53b8316cb558389f6392461e5d"} Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.858165 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8kgk8" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.858482 4782 scope.go:117] "RemoveContainer" containerID="fc6b43878c5c68c368ce8f7b965c92c0b232bc6fe8a4169e4ecc342b65fd0136" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.883796 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8kgk8"] Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.887093 4782 scope.go:117] "RemoveContainer" containerID="7e97f83a2715df3284705d6cc51b9d0ff852beaa079033a6f1b8a294e23c7fc4" Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.893638 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8kgk8"] Nov 24 12:51:57 crc kubenswrapper[4782]: I1124 12:51:57.913560 4782 scope.go:117] "RemoveContainer" containerID="e07a0a03f5c187d6dee1dc13375c351e27bd7852841f912867c2d5947780422a" Nov 24 12:51:59 crc kubenswrapper[4782]: I1124 12:51:59.501784 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" path="/var/lib/kubelet/pods/b28b07c9-871a-416e-8eb0-7ada07825bac/volumes" Nov 24 12:53:30 crc kubenswrapper[4782]: I1124 12:53:30.411064 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:53:30 crc kubenswrapper[4782]: I1124 12:53:30.411893 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:53:35 crc kubenswrapper[4782]: I1124 12:53:35.716000 4782 generic.go:334] "Generic (PLEG): container finished" podID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerID="3f7af836540877b1356f36ec29b8ba93d46cbeb89f1a2c0c768054d4bfa6feea" exitCode=0 Nov 24 12:53:35 crc kubenswrapper[4782]: I1124 12:53:35.716111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" event={"ID":"7a964d21-2595-47ee-ae52-2d0677bd25eb","Type":"ContainerDied","Data":"3f7af836540877b1356f36ec29b8ba93d46cbeb89f1a2c0c768054d4bfa6feea"} Nov 24 12:53:35 crc kubenswrapper[4782]: I1124 12:53:35.717215 4782 scope.go:117] "RemoveContainer" containerID="3f7af836540877b1356f36ec29b8ba93d46cbeb89f1a2c0c768054d4bfa6feea" Nov 24 12:53:36 crc kubenswrapper[4782]: I1124 12:53:36.051110 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kvrl6_must-gather-h6t7l_7a964d21-2595-47ee-ae52-2d0677bd25eb/gather/0.log" Nov 24 12:53:44 crc kubenswrapper[4782]: I1124 12:53:44.654329 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kvrl6/must-gather-h6t7l"] Nov 24 12:53:44 crc kubenswrapper[4782]: I1124 12:53:44.655244 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerName="copy" containerID="cri-o://6d71022feea65a0023180470a98ba8a9a2b8d64d405cd839ae57862457b63455" gracePeriod=2 Nov 24 12:53:44 crc kubenswrapper[4782]: I1124 12:53:44.661525 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kvrl6/must-gather-h6t7l"] Nov 24 12:53:44 crc kubenswrapper[4782]: I1124 12:53:44.847632 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kvrl6_must-gather-h6t7l_7a964d21-2595-47ee-ae52-2d0677bd25eb/copy/0.log" Nov 24 12:53:44 crc kubenswrapper[4782]: I1124 12:53:44.849716 4782 generic.go:334] "Generic (PLEG): container finished" podID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerID="6d71022feea65a0023180470a98ba8a9a2b8d64d405cd839ae57862457b63455" exitCode=143 Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.218844 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kvrl6_must-gather-h6t7l_7a964d21-2595-47ee-ae52-2d0677bd25eb/copy/0.log" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.219322 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.288647 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a964d21-2595-47ee-ae52-2d0677bd25eb-must-gather-output\") pod \"7a964d21-2595-47ee-ae52-2d0677bd25eb\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.288860 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcjrh\" (UniqueName: \"kubernetes.io/projected/7a964d21-2595-47ee-ae52-2d0677bd25eb-kube-api-access-rcjrh\") pod \"7a964d21-2595-47ee-ae52-2d0677bd25eb\" (UID: \"7a964d21-2595-47ee-ae52-2d0677bd25eb\") " Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.294577 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a964d21-2595-47ee-ae52-2d0677bd25eb-kube-api-access-rcjrh" (OuterVolumeSpecName: "kube-api-access-rcjrh") pod "7a964d21-2595-47ee-ae52-2d0677bd25eb" (UID: "7a964d21-2595-47ee-ae52-2d0677bd25eb"). InnerVolumeSpecName "kube-api-access-rcjrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.390874 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcjrh\" (UniqueName: \"kubernetes.io/projected/7a964d21-2595-47ee-ae52-2d0677bd25eb-kube-api-access-rcjrh\") on node \"crc\" DevicePath \"\"" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.417799 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a964d21-2595-47ee-ae52-2d0677bd25eb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7a964d21-2595-47ee-ae52-2d0677bd25eb" (UID: "7a964d21-2595-47ee-ae52-2d0677bd25eb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.492521 4782 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a964d21-2595-47ee-ae52-2d0677bd25eb-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.501586 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" path="/var/lib/kubelet/pods/7a964d21-2595-47ee-ae52-2d0677bd25eb/volumes" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.858814 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kvrl6_must-gather-h6t7l_7a964d21-2595-47ee-ae52-2d0677bd25eb/copy/0.log" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.859157 4782 scope.go:117] "RemoveContainer" containerID="6d71022feea65a0023180470a98ba8a9a2b8d64d405cd839ae57862457b63455" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.859210 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kvrl6/must-gather-h6t7l" Nov 24 12:53:45 crc kubenswrapper[4782]: I1124 12:53:45.920822 4782 scope.go:117] "RemoveContainer" containerID="3f7af836540877b1356f36ec29b8ba93d46cbeb89f1a2c0c768054d4bfa6feea" Nov 24 12:54:00 crc kubenswrapper[4782]: I1124 12:54:00.410158 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:54:00 crc kubenswrapper[4782]: I1124 12:54:00.410663 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:54:30 crc kubenswrapper[4782]: I1124 12:54:30.410651 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:54:30 crc kubenswrapper[4782]: I1124 12:54:30.411185 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:54:30 crc kubenswrapper[4782]: I1124 12:54:30.411235 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 12:54:30 crc kubenswrapper[4782]: I1124 12:54:30.412101 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:54:30 crc kubenswrapper[4782]: I1124 12:54:30.412174 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" gracePeriod=600 Nov 24 12:54:30 crc kubenswrapper[4782]: E1124 12:54:30.541395 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:54:31 crc kubenswrapper[4782]: I1124 12:54:31.268282 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" exitCode=0 Nov 24 12:54:31 crc 
kubenswrapper[4782]: I1124 12:54:31.268345 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c"} Nov 24 12:54:31 crc kubenswrapper[4782]: I1124 12:54:31.268507 4782 scope.go:117] "RemoveContainer" containerID="e327dfe85cc20d32dff283802dfe42d09ab68e0f3ad1c8b5f638e24d0843354c" Nov 24 12:54:31 crc kubenswrapper[4782]: I1124 12:54:31.270123 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:54:31 crc kubenswrapper[4782]: E1124 12:54:31.273350 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:54:44 crc kubenswrapper[4782]: I1124 12:54:44.491134 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:54:44 crc kubenswrapper[4782]: E1124 12:54:44.491902 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:54:58 crc kubenswrapper[4782]: I1124 12:54:58.491621 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:54:58 crc kubenswrapper[4782]: E1124 12:54:58.492582 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:55:13 crc kubenswrapper[4782]: I1124 12:55:13.490901 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:55:13 crc kubenswrapper[4782]: E1124 12:55:13.491755 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:55:26 crc kubenswrapper[4782]: I1124 12:55:26.490541 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:55:26 crc kubenswrapper[4782]: E1124 12:55:26.491240 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:55:41 crc kubenswrapper[4782]: I1124 12:55:41.499146 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:55:41 crc kubenswrapper[4782]: E1124 12:55:41.499872 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:55:56 crc kubenswrapper[4782]: I1124 12:55:56.491616 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:55:56 crc kubenswrapper[4782]: E1124 12:55:56.492501 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:56:10 crc kubenswrapper[4782]: I1124 12:56:10.491483 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:56:10 crc kubenswrapper[4782]: E1124 12:56:10.492250 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:56:23 crc kubenswrapper[4782]: I1124 12:56:23.490935 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:56:23 crc kubenswrapper[4782]: E1124 12:56:23.491809 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.499890 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zb8ql/must-gather-kgfh9"] Nov 24 12:56:24 crc kubenswrapper[4782]: E1124 12:56:24.500297 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerName="copy" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500308 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerName="copy" Nov 24 12:56:24 
crc kubenswrapper[4782]: E1124 12:56:24.500337 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="registry-server" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500343 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="registry-server" Nov 24 12:56:24 crc kubenswrapper[4782]: E1124 12:56:24.500361 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerName="gather" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500389 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerName="gather" Nov 24 12:56:24 crc kubenswrapper[4782]: E1124 12:56:24.500397 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="extract-content" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500404 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="extract-content" Nov 24 12:56:24 crc kubenswrapper[4782]: E1124 12:56:24.500414 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="extract-utilities" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500419 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="extract-utilities" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500659 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28b07c9-871a-416e-8eb0-7ada07825bac" containerName="registry-server" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500689 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerName="copy" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.500696 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a964d21-2595-47ee-ae52-2d0677bd25eb" containerName="gather" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.501885 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.504279 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zb8ql"/"openshift-service-ca.crt" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.505408 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zb8ql"/"kube-root-ca.crt" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.572740 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zb8ql/must-gather-kgfh9"] Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.680333 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-must-gather-output\") pod \"must-gather-kgfh9\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.680771 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz5wk\" (UniqueName: \"kubernetes.io/projected/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-kube-api-access-dz5wk\") pod \"must-gather-kgfh9\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.783155 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-must-gather-output\") pod \"must-gather-kgfh9\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.783234 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz5wk\" (UniqueName: \"kubernetes.io/projected/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-kube-api-access-dz5wk\") pod \"must-gather-kgfh9\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.783714 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-must-gather-output\") pod \"must-gather-kgfh9\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.819861 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz5wk\" (UniqueName: \"kubernetes.io/projected/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-kube-api-access-dz5wk\") pod \"must-gather-kgfh9\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:24 crc kubenswrapper[4782]: I1124 12:56:24.823113 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 12:56:25 crc kubenswrapper[4782]: I1124 12:56:25.506347 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zb8ql/must-gather-kgfh9"] Nov 24 12:56:26 crc kubenswrapper[4782]: I1124 12:56:26.328406 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" event={"ID":"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd","Type":"ContainerStarted","Data":"b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968"} Nov 24 12:56:26 crc kubenswrapper[4782]: I1124 12:56:26.329033 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" event={"ID":"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd","Type":"ContainerStarted","Data":"6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6"} Nov 24 12:56:26 crc kubenswrapper[4782]: I1124 12:56:26.329046 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" event={"ID":"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd","Type":"ContainerStarted","Data":"e8206502d1ced84efaf9ec8fcce6358c0a0d729a38e89024eff93d7e085e9dd0"} Nov 24 12:56:26 crc kubenswrapper[4782]: I1124 12:56:26.353598 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" podStartSLOduration=2.3535764759999998 podStartE2EDuration="2.353576476s" podCreationTimestamp="2025-11-24 12:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:56:26.346452456 +0000 UTC m=+3635.590286245" watchObservedRunningTime="2025-11-24 12:56:26.353576476 +0000 UTC m=+3635.597410255" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.685851 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-77l7b"] Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.688155 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.689914 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zb8ql"/"default-dockercfg-w7sgg" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.817888 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89vl\" (UniqueName: \"kubernetes.io/projected/d8c9b736-aad9-42e9-b569-46777d58137a-kube-api-access-g89vl\") pod \"crc-debug-77l7b\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.818232 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8c9b736-aad9-42e9-b569-46777d58137a-host\") pod \"crc-debug-77l7b\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.919944 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g89vl\" (UniqueName: \"kubernetes.io/projected/d8c9b736-aad9-42e9-b569-46777d58137a-kube-api-access-g89vl\") pod \"crc-debug-77l7b\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.920035 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8c9b736-aad9-42e9-b569-46777d58137a-host\") pod \"crc-debug-77l7b\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.920174 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8c9b736-aad9-42e9-b569-46777d58137a-host\") pod \"crc-debug-77l7b\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:29 crc kubenswrapper[4782]: I1124 12:56:29.940135 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g89vl\" (UniqueName: \"kubernetes.io/projected/d8c9b736-aad9-42e9-b569-46777d58137a-kube-api-access-g89vl\") pod \"crc-debug-77l7b\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:30 crc kubenswrapper[4782]: I1124 12:56:30.013221 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:56:30 crc kubenswrapper[4782]: I1124 12:56:30.367482 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" event={"ID":"d8c9b736-aad9-42e9-b569-46777d58137a","Type":"ContainerStarted","Data":"13daba3927b20e0516ec7933234ebf7d42a49e6db0c1a13bd98dbd1c8c0cab96"} Nov 24 12:56:30 crc kubenswrapper[4782]: I1124 12:56:30.367842 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" event={"ID":"d8c9b736-aad9-42e9-b569-46777d58137a","Type":"ContainerStarted","Data":"18119e4cd4a3005c0ab57c5babfd88a8775afab02d914f89ef6c1d1ba701251b"} Nov 24 12:56:30 crc kubenswrapper[4782]: I1124 12:56:30.381719 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" podStartSLOduration=1.381658942 podStartE2EDuration="1.381658942s" podCreationTimestamp="2025-11-24 12:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:56:30.379841724 +0000 UTC m=+3639.623675493" watchObservedRunningTime="2025-11-24 12:56:30.381658942 +0000 UTC m=+3639.625492711" Nov 24 12:56:35 crc kubenswrapper[4782]: I1124 12:56:35.490979 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:56:35 crc kubenswrapper[4782]: E1124 12:56:35.491735 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:56:48 crc kubenswrapper[4782]: I1124 12:56:48.491213 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:56:48 crc kubenswrapper[4782]: E1124 12:56:48.492041 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:56:59 crc kubenswrapper[4782]: I1124 12:56:59.491888 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:56:59 crc kubenswrapper[4782]: E1124 12:56:59.492664 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:57:06 crc kubenswrapper[4782]: I1124 12:57:06.732715 4782 generic.go:334] "Generic (PLEG): container finished" podID="d8c9b736-aad9-42e9-b569-46777d58137a" containerID="13daba3927b20e0516ec7933234ebf7d42a49e6db0c1a13bd98dbd1c8c0cab96" 
exitCode=0 Nov 24 12:57:06 crc kubenswrapper[4782]: I1124 12:57:06.732799 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" event={"ID":"d8c9b736-aad9-42e9-b569-46777d58137a","Type":"ContainerDied","Data":"13daba3927b20e0516ec7933234ebf7d42a49e6db0c1a13bd98dbd1c8c0cab96"} Nov 24 12:57:07 crc kubenswrapper[4782]: I1124 12:57:07.869512 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:57:07 crc kubenswrapper[4782]: I1124 12:57:07.927904 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-77l7b"] Nov 24 12:57:07 crc kubenswrapper[4782]: I1124 12:57:07.935578 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-77l7b"] Nov 24 12:57:07 crc kubenswrapper[4782]: I1124 12:57:07.974317 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g89vl\" (UniqueName: \"kubernetes.io/projected/d8c9b736-aad9-42e9-b569-46777d58137a-kube-api-access-g89vl\") pod \"d8c9b736-aad9-42e9-b569-46777d58137a\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " Nov 24 12:57:07 crc kubenswrapper[4782]: I1124 12:57:07.974735 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8c9b736-aad9-42e9-b569-46777d58137a-host\") pod \"d8c9b736-aad9-42e9-b569-46777d58137a\" (UID: \"d8c9b736-aad9-42e9-b569-46777d58137a\") " Nov 24 12:57:07 crc kubenswrapper[4782]: I1124 12:57:07.975277 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c9b736-aad9-42e9-b569-46777d58137a-host" (OuterVolumeSpecName: "host") pod "d8c9b736-aad9-42e9-b569-46777d58137a" (UID: "d8c9b736-aad9-42e9-b569-46777d58137a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:57:07 crc kubenswrapper[4782]: I1124 12:57:07.981254 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c9b736-aad9-42e9-b569-46777d58137a-kube-api-access-g89vl" (OuterVolumeSpecName: "kube-api-access-g89vl") pod "d8c9b736-aad9-42e9-b569-46777d58137a" (UID: "d8c9b736-aad9-42e9-b569-46777d58137a"). InnerVolumeSpecName "kube-api-access-g89vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:57:08 crc kubenswrapper[4782]: I1124 12:57:08.077121 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8c9b736-aad9-42e9-b569-46777d58137a-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:08 crc kubenswrapper[4782]: I1124 12:57:08.077163 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g89vl\" (UniqueName: \"kubernetes.io/projected/d8c9b736-aad9-42e9-b569-46777d58137a-kube-api-access-g89vl\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:08 crc kubenswrapper[4782]: I1124 12:57:08.752219 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18119e4cd4a3005c0ab57c5babfd88a8775afab02d914f89ef6c1d1ba701251b" Nov 24 12:57:08 crc kubenswrapper[4782]: I1124 12:57:08.752437 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-77l7b" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.209830 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-l86vp"] Nov 24 12:57:09 crc kubenswrapper[4782]: E1124 12:57:09.210275 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c9b736-aad9-42e9-b569-46777d58137a" containerName="container-00" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.210314 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c9b736-aad9-42e9-b569-46777d58137a" containerName="container-00" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.210516 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c9b736-aad9-42e9-b569-46777d58137a" containerName="container-00" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.211150 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.213283 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zb8ql"/"default-dockercfg-w7sgg" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.298697 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rlf\" (UniqueName: \"kubernetes.io/projected/e435f164-dcaf-4d88-9b1d-748e95a95486-kube-api-access-68rlf\") pod \"crc-debug-l86vp\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.298784 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e435f164-dcaf-4d88-9b1d-748e95a95486-host\") pod \"crc-debug-l86vp\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.400454 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rlf\" (UniqueName: \"kubernetes.io/projected/e435f164-dcaf-4d88-9b1d-748e95a95486-kube-api-access-68rlf\") pod \"crc-debug-l86vp\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.400900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e435f164-dcaf-4d88-9b1d-748e95a95486-host\") pod \"crc-debug-l86vp\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.400978 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e435f164-dcaf-4d88-9b1d-748e95a95486-host\") pod \"crc-debug-l86vp\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.431651 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rlf\" (UniqueName: \"kubernetes.io/projected/e435f164-dcaf-4d88-9b1d-748e95a95486-kube-api-access-68rlf\") pod \"crc-debug-l86vp\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 
12:57:09.501915 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c9b736-aad9-42e9-b569-46777d58137a" path="/var/lib/kubelet/pods/d8c9b736-aad9-42e9-b569-46777d58137a/volumes" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.531096 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:09 crc kubenswrapper[4782]: I1124 12:57:09.763485 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/crc-debug-l86vp" event={"ID":"e435f164-dcaf-4d88-9b1d-748e95a95486","Type":"ContainerStarted","Data":"0c630434a2b026ba88b0d083d419f6e838a097ba86dbbcc59bf4814769bea0d2"} Nov 24 12:57:10 crc kubenswrapper[4782]: I1124 12:57:10.773964 4782 generic.go:334] "Generic (PLEG): container finished" podID="e435f164-dcaf-4d88-9b1d-748e95a95486" containerID="3f0999e46130182310b9dd1070a6e3df00e8f4a25733aa3e20dc41630a06cda2" exitCode=0 Nov 24 12:57:10 crc kubenswrapper[4782]: I1124 12:57:10.774567 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/crc-debug-l86vp" event={"ID":"e435f164-dcaf-4d88-9b1d-748e95a95486","Type":"ContainerDied","Data":"3f0999e46130182310b9dd1070a6e3df00e8f4a25733aa3e20dc41630a06cda2"} Nov 24 12:57:11 crc kubenswrapper[4782]: I1124 12:57:11.194951 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-l86vp"] Nov 24 12:57:11 crc kubenswrapper[4782]: I1124 12:57:11.202910 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-l86vp"] Nov 24 12:57:11 crc kubenswrapper[4782]: I1124 12:57:11.500228 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:57:11 crc kubenswrapper[4782]: E1124 12:57:11.500592 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:57:11 crc kubenswrapper[4782]: I1124 12:57:11.915196 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.022441 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e435f164-dcaf-4d88-9b1d-748e95a95486-host\") pod \"e435f164-dcaf-4d88-9b1d-748e95a95486\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.022693 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rlf\" (UniqueName: \"kubernetes.io/projected/e435f164-dcaf-4d88-9b1d-748e95a95486-kube-api-access-68rlf\") pod \"e435f164-dcaf-4d88-9b1d-748e95a95486\" (UID: \"e435f164-dcaf-4d88-9b1d-748e95a95486\") " Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.022821 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e435f164-dcaf-4d88-9b1d-748e95a95486-host" (OuterVolumeSpecName: "host") pod "e435f164-dcaf-4d88-9b1d-748e95a95486" (UID: "e435f164-dcaf-4d88-9b1d-748e95a95486"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.023141 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e435f164-dcaf-4d88-9b1d-748e95a95486-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.034685 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e435f164-dcaf-4d88-9b1d-748e95a95486-kube-api-access-68rlf" (OuterVolumeSpecName: "kube-api-access-68rlf") pod "e435f164-dcaf-4d88-9b1d-748e95a95486" (UID: "e435f164-dcaf-4d88-9b1d-748e95a95486"). InnerVolumeSpecName "kube-api-access-68rlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.125406 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rlf\" (UniqueName: \"kubernetes.io/projected/e435f164-dcaf-4d88-9b1d-748e95a95486-kube-api-access-68rlf\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.462023 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-zqf9c"] Nov 24 12:57:12 crc kubenswrapper[4782]: E1124 12:57:12.462626 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e435f164-dcaf-4d88-9b1d-748e95a95486" containerName="container-00" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.462642 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e435f164-dcaf-4d88-9b1d-748e95a95486" containerName="container-00" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.462828 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e435f164-dcaf-4d88-9b1d-748e95a95486" containerName="container-00" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.463390 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.635028 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-host\") pod \"crc-debug-zqf9c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.635082 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdbzr\" (UniqueName: \"kubernetes.io/projected/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-kube-api-access-kdbzr\") pod \"crc-debug-zqf9c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.737250 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-host\") pod \"crc-debug-zqf9c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.737321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdbzr\" (UniqueName: \"kubernetes.io/projected/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-kube-api-access-kdbzr\") pod \"crc-debug-zqf9c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.737437 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-host\") pod \"crc-debug-zqf9c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.755973 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdbzr\" (UniqueName: \"kubernetes.io/projected/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-kube-api-access-kdbzr\") pod \"crc-debug-zqf9c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.777715 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.796729 4782 scope.go:117] "RemoveContainer" containerID="3f0999e46130182310b9dd1070a6e3df00e8f4a25733aa3e20dc41630a06cda2" Nov 24 12:57:12 crc kubenswrapper[4782]: I1124 12:57:12.796914 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-l86vp" Nov 24 12:57:12 crc kubenswrapper[4782]: W1124 12:57:12.849667 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17aaa62_0afb_4d82_bb25_c1ab25f42c3c.slice/crio-f43f6c8a4f094f794739629accfec9ba85f49aea3a6bf4216cf25b9c7526b4c4 WatchSource:0}: Error finding container f43f6c8a4f094f794739629accfec9ba85f49aea3a6bf4216cf25b9c7526b4c4: Status 404 returned error can't find the container with id f43f6c8a4f094f794739629accfec9ba85f49aea3a6bf4216cf25b9c7526b4c4 Nov 24 12:57:13 crc kubenswrapper[4782]: I1124 12:57:13.504448 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e435f164-dcaf-4d88-9b1d-748e95a95486" path="/var/lib/kubelet/pods/e435f164-dcaf-4d88-9b1d-748e95a95486/volumes" Nov 24 12:57:13 crc kubenswrapper[4782]: I1124 12:57:13.808995 4782 generic.go:334] "Generic (PLEG): container finished" podID="e17aaa62-0afb-4d82-bb25-c1ab25f42c3c" containerID="6d61998cfec72f4dbf21b08f0367eb78d51ff64dd094618b6577abf163500526" exitCode=0 Nov 24 12:57:13 crc kubenswrapper[4782]: I1124 12:57:13.809036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" event={"ID":"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c","Type":"ContainerDied","Data":"6d61998cfec72f4dbf21b08f0367eb78d51ff64dd094618b6577abf163500526"} Nov 24 12:57:13 crc kubenswrapper[4782]: I1124 12:57:13.809063 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" event={"ID":"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c","Type":"ContainerStarted","Data":"f43f6c8a4f094f794739629accfec9ba85f49aea3a6bf4216cf25b9c7526b4c4"} Nov 24 12:57:13 crc kubenswrapper[4782]: I1124 12:57:13.839875 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-zqf9c"] Nov 24 12:57:13 crc kubenswrapper[4782]: I1124 12:57:13.847142 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zb8ql/crc-debug-zqf9c"] Nov 24 12:57:14 crc kubenswrapper[4782]: I1124 12:57:14.935673 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:14 crc kubenswrapper[4782]: I1124 12:57:14.989174 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-host\") pod \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " Nov 24 12:57:14 crc kubenswrapper[4782]: I1124 12:57:14.989302 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdbzr\" (UniqueName: \"kubernetes.io/projected/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-kube-api-access-kdbzr\") pod \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\" (UID: \"e17aaa62-0afb-4d82-bb25-c1ab25f42c3c\") " Nov 24 12:57:14 crc kubenswrapper[4782]: I1124 12:57:14.989296 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-host" (OuterVolumeSpecName: "host") pod "e17aaa62-0afb-4d82-bb25-c1ab25f42c3c" (UID: "e17aaa62-0afb-4d82-bb25-c1ab25f42c3c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:57:14 crc kubenswrapper[4782]: I1124 12:57:14.989969 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:14 crc kubenswrapper[4782]: I1124 12:57:14.994341 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-kube-api-access-kdbzr" (OuterVolumeSpecName: "kube-api-access-kdbzr") pod "e17aaa62-0afb-4d82-bb25-c1ab25f42c3c" (UID: "e17aaa62-0afb-4d82-bb25-c1ab25f42c3c"). InnerVolumeSpecName "kube-api-access-kdbzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:57:15 crc kubenswrapper[4782]: I1124 12:57:15.092401 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdbzr\" (UniqueName: \"kubernetes.io/projected/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c-kube-api-access-kdbzr\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:15 crc kubenswrapper[4782]: I1124 12:57:15.501272 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17aaa62-0afb-4d82-bb25-c1ab25f42c3c" path="/var/lib/kubelet/pods/e17aaa62-0afb-4d82-bb25-c1ab25f42c3c/volumes" Nov 24 12:57:15 crc kubenswrapper[4782]: I1124 12:57:15.829619 4782 scope.go:117] "RemoveContainer" containerID="6d61998cfec72f4dbf21b08f0367eb78d51ff64dd094618b6577abf163500526" Nov 24 12:57:15 crc kubenswrapper[4782]: I1124 12:57:15.829696 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zb8ql/crc-debug-zqf9c" Nov 24 12:57:26 crc kubenswrapper[4782]: I1124 12:57:26.490874 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:57:26 crc kubenswrapper[4782]: E1124 12:57:26.491644 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:57:41 crc kubenswrapper[4782]: I1124 12:57:41.500214 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:57:41 crc kubenswrapper[4782]: E1124 12:57:41.500942 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:57:42 crc kubenswrapper[4782]: I1124 12:57:42.291266 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5584bf45bd-6fhhg_de56c6c9-b982-419d-be5c-97f1f9379747/barbican-api/0.log" Nov 24 12:57:42 crc kubenswrapper[4782]: I1124 12:57:42.441029 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5584bf45bd-6fhhg_de56c6c9-b982-419d-be5c-97f1f9379747/barbican-api-log/0.log" Nov 24 12:57:42 crc kubenswrapper[4782]: I1124 12:57:42.570929 4782 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-567dd88794-rs7lm_4f2c93b3-0f72-4e4e-bc85-c719e2e9954b/barbican-keystone-listener/0.log" Nov 24 12:57:42 crc kubenswrapper[4782]: I1124 12:57:42.807329 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-567dd88794-rs7lm_4f2c93b3-0f72-4e4e-bc85-c719e2e9954b/barbican-keystone-listener-log/0.log" Nov 24 12:57:42 crc kubenswrapper[4782]: I1124 12:57:42.834625 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8668478d95-lb5cp_b310f8bf-62fa-4955-984a-1df40c4e3a38/barbican-worker/0.log" Nov 24 12:57:42 crc kubenswrapper[4782]: I1124 12:57:42.860131 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8668478d95-lb5cp_b310f8bf-62fa-4955-984a-1df40c4e3a38/barbican-worker-log/0.log" Nov 24 12:57:43 crc kubenswrapper[4782]: I1124 12:57:43.343895 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-l9bvw_672fd75c-f2f7-4396-a11e-e4e5abf8ab13/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:43 crc kubenswrapper[4782]: I1124 12:57:43.399349 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/ceilometer-central-agent/0.log" Nov 24 12:57:43 crc kubenswrapper[4782]: I1124 12:57:43.583181 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/proxy-httpd/0.log" Nov 24 12:57:43 crc kubenswrapper[4782]: I1124 12:57:43.635754 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/ceilometer-notification-agent/0.log" Nov 24 12:57:43 crc kubenswrapper[4782]: I1124 12:57:43.682320 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0a6e941b-a7bd-4365-88eb-5daaa2b590ab/sg-core/0.log" Nov 24 12:57:43 crc kubenswrapper[4782]: I1124 12:57:43.867807 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_25280233-1f0e-44f9-80ce-48d3d2413861/cinder-api-log/0.log" Nov 24 12:57:43 crc kubenswrapper[4782]: I1124 12:57:43.892364 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_25280233-1f0e-44f9-80ce-48d3d2413861/cinder-api/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.075341 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_611df7d1-ff5a-4747-b3ed-be19deedd3c6/cinder-scheduler/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.147850 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_611df7d1-ff5a-4747-b3ed-be19deedd3c6/probe/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.243398 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pn6xt_521af29a-8b28-4633-adc5-857ca14e0312/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.390478 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s4tln_58220605-30a9-4d4f-b785-3e9edabcfb5c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.467512 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-qkj56_77f4f46d-6156-43bb-b49d-6371cb8921c1/init/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.737868 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-qkj56_77f4f46d-6156-43bb-b49d-6371cb8921c1/init/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.810310 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-qkj56_77f4f46d-6156-43bb-b49d-6371cb8921c1/dnsmasq-dns/0.log" Nov 24 12:57:44 crc kubenswrapper[4782]: I1124 12:57:44.906751 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ccf95_44a60db3-16e5-4ad6-8ccd-2a3da3b6dbb7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.065732 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c5c3127c-bed5-4d35-b535-fc6ca3f79e86/glance-httpd/0.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.131056 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c5c3127c-bed5-4d35-b535-fc6ca3f79e86/glance-log/0.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.315897 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_533bc3bf-a4ed-4133-b448-9888eeea6416/glance-log/0.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.354826 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_533bc3bf-a4ed-4133-b448-9888eeea6416/glance-httpd/0.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.670163 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6574f9bb76-jkv6h_41a8247d-b0d2-4a46-b108-bc260db36e11/horizon/2.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.720745 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6574f9bb76-jkv6h_41a8247d-b0d2-4a46-b108-bc260db36e11/horizon/1.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.894882 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6574f9bb76-jkv6h_41a8247d-b0d2-4a46-b108-bc260db36e11/horizon-log/0.log" Nov 24 12:57:45 crc kubenswrapper[4782]: I1124 12:57:45.945132 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-w9859_e7525d3d-3415-44de-a76a-e6de73a7dc1f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:46 crc kubenswrapper[4782]: I1124 12:57:46.045924 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-w7gqb_ca3c5d6f-8a83-4d96-b5a5-dc0bcec2a913/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:46 crc kubenswrapper[4782]: I1124 12:57:46.418694 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6f389ec5-41d8-4afb-9df2-792618e38c30/kube-state-metrics/0.log" Nov 24 12:57:46 crc kubenswrapper[4782]: I1124 12:57:46.439309 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54579c9c49-nkmgh_4b6ef93c-ca86-4207-8cba-0cd8bc486889/keystone-api/0.log" Nov 24 12:57:46 crc kubenswrapper[4782]: I1124 12:57:46.629853 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hz4dk_1af97733-205a-4fc3-804c-77517c7053db/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:47 crc kubenswrapper[4782]: I1124 12:57:47.093909 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-665949bbb5-7lm9x_6046c36e-6c5a-49e4-850b-d15d227c7851/neutron-api/0.log" Nov 24 12:57:47 crc kubenswrapper[4782]: I1124 12:57:47.157194 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-665949bbb5-7lm9x_6046c36e-6c5a-49e4-850b-d15d227c7851/neutron-httpd/0.log" Nov 24 12:57:47 crc kubenswrapper[4782]: I1124 12:57:47.305697 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hph46_891636a5-0fde-4436-b3ab-7831d7420439/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:47 crc kubenswrapper[4782]: I1124 12:57:47.974040 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39c39c96-99d5-4e76-9c99-20d1310fe1ac/nova-api-log/0.log" Nov 24 12:57:48 crc kubenswrapper[4782]: I1124 12:57:48.057345 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39c39c96-99d5-4e76-9c99-20d1310fe1ac/nova-api-api/0.log" Nov 24 12:57:48 crc kubenswrapper[4782]: I1124 12:57:48.106349 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_79cba7f4-7d61-489a-9c67-41a7a0dc1c28/nova-cell0-conductor-conductor/0.log" Nov 24 12:57:48 crc kubenswrapper[4782]: I1124 12:57:48.432236 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a1d44b32-67f5-4294-962e-e4c2821714f0/nova-cell1-conductor-conductor/0.log" Nov 24 12:57:48 crc kubenswrapper[4782]: I1124 12:57:48.568812 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4154f325-2ba9-4e67-a59e-d5e71d9f8cd8/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 12:57:48 crc kubenswrapper[4782]: I1124 12:57:48.691962 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5vt47_6cd5f290-1276-4bbf-a7c0-9075e776dd0b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:48 crc kubenswrapper[4782]: I1124 12:57:48.926855 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ef6b2c28-7003-45fe-922e-40b6f5c2a43a/nova-metadata-log/0.log" Nov 24 12:57:49 crc kubenswrapper[4782]: I1124 12:57:49.212244 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b66c75fd-ec79-4997-9e45-70865f612c8f/mysql-bootstrap/0.log" Nov 24 12:57:49 crc kubenswrapper[4782]: I1124 12:57:49.242988 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c234628b-dc63-4176-b7d6-5506de5cd15b/nova-scheduler-scheduler/0.log" Nov 24 12:57:49 crc kubenswrapper[4782]: I1124 12:57:49.595933 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b66c75fd-ec79-4997-9e45-70865f612c8f/mysql-bootstrap/0.log" Nov 24 12:57:49 crc kubenswrapper[4782]: I1124 12:57:49.624166 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b66c75fd-ec79-4997-9e45-70865f612c8f/galera/0.log" Nov 24 12:57:49 crc kubenswrapper[4782]: I1124 12:57:49.840070 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_b33a2a59-697b-4973-b01d-5933d2319593/mysql-bootstrap/0.log" Nov 24 12:57:50 crc kubenswrapper[4782]: I1124 12:57:50.143350 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b33a2a59-697b-4973-b01d-5933d2319593/galera/0.log" Nov 24 12:57:50 crc kubenswrapper[4782]: I1124 12:57:50.150478 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b33a2a59-697b-4973-b01d-5933d2319593/mysql-bootstrap/0.log" Nov 24 12:57:50 crc kubenswrapper[4782]: I1124 12:57:50.302288 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ef6b2c28-7003-45fe-922e-40b6f5c2a43a/nova-metadata-metadata/0.log" Nov 24 12:57:50 crc kubenswrapper[4782]: I1124 12:57:50.540761 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c7c7aa63-55ae-4525-a262-c5c9d08e4fe7/openstackclient/0.log" Nov 24 12:57:50 crc kubenswrapper[4782]: I1124 12:57:50.580917 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-m6c9b_a62553ed-d73b-49c8-be06-e9ad0542d8da/ovn-controller/0.log" Nov 24 12:57:51 crc kubenswrapper[4782]: I1124 12:57:51.079362 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hjt97_95804fb9-455c-4226-acb2-97418cd75b7e/openstack-network-exporter/0.log" Nov 24 12:57:51 crc kubenswrapper[4782]: I1124 12:57:51.082417 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovsdb-server-init/0.log" Nov 24 12:57:51 crc kubenswrapper[4782]: I1124 12:57:51.564115 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovs-vswitchd/0.log" Nov 24 12:57:51 crc kubenswrapper[4782]: I1124 12:57:51.572556 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovsdb-server-init/0.log" Nov 24 12:57:51 crc kubenswrapper[4782]: I1124 12:57:51.627002 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7bqn5_4836b782-f203-42c9-95f7-58a33a861aa1/ovsdb-server/0.log" Nov 24 12:57:51 crc kubenswrapper[4782]: I1124 12:57:51.914120 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0/ovn-northd/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.005703 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4cbeba12-bb6c-4c9e-92d9-8e97cf3d18e0/openstack-network-exporter/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.019365 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kc6qz_b30e01d5-eac0-49f4-88f5-bf4b5758bf1d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.327381 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_75e93622-05f8-4afc-868b-0a6f157fa62b/openstack-network-exporter/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.386109 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_75e93622-05f8-4afc-868b-0a6f157fa62b/ovsdbserver-nb/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.554261 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3/openstack-network-exporter/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.587683 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d8f0a45-8bf4-4c35-a6bf-1df2b2f860c3/ovsdbserver-sb/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.872786 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548457c99b-pdf6j_b571494b-eadd-44e4-b7cd-122dbbaddef5/placement-api/0.log" Nov 24 12:57:52 crc kubenswrapper[4782]: I1124 12:57:52.961059 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548457c99b-pdf6j_b571494b-eadd-44e4-b7cd-122dbbaddef5/placement-log/0.log" Nov 24 12:57:53 crc kubenswrapper[4782]: I1124 12:57:53.064142 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_39483c87-eb4a-4adf-81de-ae60ec596fe8/setup-container/0.log" Nov 24 12:57:53 crc kubenswrapper[4782]: I1124 12:57:53.310337 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_39483c87-eb4a-4adf-81de-ae60ec596fe8/rabbitmq/0.log" Nov 24 12:57:53 crc kubenswrapper[4782]: I1124 12:57:53.378742 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9/setup-container/0.log" Nov 24 12:57:53 crc kubenswrapper[4782]: I1124 12:57:53.380747 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_39483c87-eb4a-4adf-81de-ae60ec596fe8/setup-container/0.log" Nov 24 12:57:53 crc kubenswrapper[4782]: I1124 12:57:53.664723 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9/setup-container/0.log" Nov 24 12:57:53 crc kubenswrapper[4782]: I1124 12:57:53.805605 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63a3a3ca-d486-4f23-ae6b-3bc4cb02f8a9/rabbitmq/0.log" Nov 24 12:57:53 crc kubenswrapper[4782]: I1124 12:57:53.918930 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-h8ndk_f27d1d98-ecfa-4977-aa6c-abf87b9e244a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:54 crc kubenswrapper[4782]: I1124 12:57:54.054947 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d2g2l_0b6970e9-155c-4b80-98ee-9305e8b942f2/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:54 crc kubenswrapper[4782]: I1124 12:57:54.204735 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-b8m24_aa0cf12a-8750-4351-a6a7-e66bf1bb074c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:54 crc kubenswrapper[4782]: I1124 12:57:54.429342 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6c48d_b11f38fd-d0b3-4272-8c87-921c1d40b832/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:54 crc kubenswrapper[4782]: I1124 12:57:54.660832 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6bsqg_c8f27e6b-2964-4a8b-b976-92fb6421705a/ssh-known-hosts-edpm-deployment/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.056685 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-556bd89d59-52m2m_a2fa4f6f-fc43-4b5c-af94-0534b54364d7/proxy-server/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.123618 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-556bd89d59-52m2m_a2fa4f6f-fc43-4b5c-af94-0534b54364d7/proxy-httpd/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.394571 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dnt8l_8b31b3d1-1239-45a8-9380-693d4ce10324/swift-ring-rebalance/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.423645 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-auditor/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.486605 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-reaper/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.490397 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:57:55 crc kubenswrapper[4782]: E1124 12:57:55.490758 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.685853 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-server/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.751498 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/account-replicator/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.783833 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-replicator/0.log" Nov 24 12:57:55 crc kubenswrapper[4782]: I1124 12:57:55.914166 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-auditor/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.014864 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-server/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.022584 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-auditor/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.033985 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/container-updater/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.161520 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-expirer/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.320507 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-server/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.336190 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-replicator/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.348180 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/object-updater/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.393224 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/rsync/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.531524 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81dbdeba-8b69-4638-b076-29f9edaeffa6/swift-recon-cron/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.809241 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-snkfr_c9db5a23-263f-41cc-a1b6-b90df79aa8d2/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:56 crc kubenswrapper[4782]: I1124 12:57:56.858164 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_bf2749fb-4ae8-43f8-847e-3d4528d4556a/tempest-tests-tempest-tests-runner/0.log" Nov 24 12:57:57 crc kubenswrapper[4782]: I1124 12:57:57.008228 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2cac3222-c76a-4e73-8333-38f146cec5c9/test-operator-logs-container/0.log" Nov 24 12:57:57 crc kubenswrapper[4782]: I1124 12:57:57.201822 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4mqxp_c153f0a7-9375-40ea-9d60-aad9c960a30a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:58:05 crc kubenswrapper[4782]: I1124 12:58:05.222553 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_579fda47-7251-4722-b19c-eadbf6aaba21/memcached/0.log" Nov 24 12:58:06 crc kubenswrapper[4782]: I1124 12:58:06.490783 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:58:06 crc kubenswrapper[4782]: E1124 12:58:06.491250 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:58:20 crc kubenswrapper[4782]: I1124 12:58:20.490612 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:58:20 crc kubenswrapper[4782]: E1124 12:58:20.491320 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" 
podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:58:27 crc kubenswrapper[4782]: I1124 12:58:27.942358 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-kr5jc_9adec34d-0e3d-4f65-80b2-4ba1c0731be4/kube-rbac-proxy/0.log" Nov 24 12:58:28 crc kubenswrapper[4782]: I1124 12:58:28.084667 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-kr5jc_9adec34d-0e3d-4f65-80b2-4ba1c0731be4/manager/0.log" Nov 24 12:58:28 crc kubenswrapper[4782]: I1124 12:58:28.173789 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k5z2n_45393058-140b-48ea-9691-9bbe0740342b/kube-rbac-proxy/0.log" Nov 24 12:58:28 crc kubenswrapper[4782]: I1124 12:58:28.786766 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/util/0.log" Nov 24 12:58:28 crc kubenswrapper[4782]: I1124 12:58:28.861070 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k5z2n_45393058-140b-48ea-9691-9bbe0740342b/manager/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.002008 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/pull/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.009125 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/pull/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.045042 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/util/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.177492 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/pull/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.237971 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/util/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.245926 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ddb5fa85f02f640c34425527f223c3bab5d07f9684e762050c8f7fade7fgnqh_1d52cfc6-407b-4fc0-9ea9-126bd1aedc29/extract/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.398066 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-czqnv_5ace8cad-a0d4-4ba1-99f8-a097edd76a74/kube-rbac-proxy/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.471455 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-czqnv_5ace8cad-a0d4-4ba1-99f8-a097edd76a74/manager/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.572871 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-kmhd6_ba20509b-c083-42f1-bf39-be2ed4a463f7/kube-rbac-proxy/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.694256 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-kmhd6_ba20509b-c083-42f1-bf39-be2ed4a463f7/manager/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.803343 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-wmsss_2ae11a51-1628-454f-8b78-77e9aaa2691b/kube-rbac-proxy/0.log" Nov 24 12:58:29 crc kubenswrapper[4782]: I1124 12:58:29.825210 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-wmsss_2ae11a51-1628-454f-8b78-77e9aaa2691b/manager/0.log" Nov 24 12:58:30 crc kubenswrapper[4782]: I1124 12:58:30.331352 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-nqd5j_e6982d2e-f7d3-4374-bc66-7949d3bcc062/manager/0.log" Nov 24 12:58:30 crc kubenswrapper[4782]: I1124 12:58:30.338915 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-nqd5j_e6982d2e-f7d3-4374-bc66-7949d3bcc062/kube-rbac-proxy/0.log" Nov 24 12:58:30 crc kubenswrapper[4782]: I1124 12:58:30.521645 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-fjggr_61b6c96b-b73c-47b5-8e05-988870f4587f/kube-rbac-proxy/0.log" Nov 24 12:58:30 crc kubenswrapper[4782]: I1124 12:58:30.629226 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-whlt5_6b6efe11-117c-42a3-baa5-b43b07557e43/kube-rbac-proxy/0.log" Nov 24 12:58:30 crc kubenswrapper[4782]: I1124 12:58:30.732674 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-fjggr_61b6c96b-b73c-47b5-8e05-988870f4587f/manager/0.log" Nov 24 12:58:30 crc kubenswrapper[4782]: I1124 12:58:30.800021 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-whlt5_6b6efe11-117c-42a3-baa5-b43b07557e43/manager/0.log" Nov 24 12:58:30 crc kubenswrapper[4782]: I1124 12:58:30.906559 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b854ddf99-pb2wn_7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c/kube-rbac-proxy/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.018830 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b854ddf99-pb2wn_7c8a9d0f-29a6-4ff7-b145-c5a32b8f5b0c/manager/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.100877 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-ctr6x_ef74c0aa-ac31-49b1-861d-258fe0a3ddff/kube-rbac-proxy/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.222425 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-ctr6x_ef74c0aa-ac31-49b1-861d-258fe0a3ddff/manager/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.341511 4782 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-57dk2_34e6f50e-248f-4ef3-a145-83ccb7616d0d/kube-rbac-proxy/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.368639 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-57dk2_34e6f50e-248f-4ef3-a145-83ccb7616d0d/manager/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.491775 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:58:31 crc kubenswrapper[4782]: E1124 12:58:31.492178 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.599979 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tmp5g_9e50a599-1a70-46c9-94a1-d3148778888d/kube-rbac-proxy/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.696747 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tmp5g_9e50a599-1a70-46c9-94a1-d3148778888d/manager/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.808048 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2hr2j_ec2d6fc2-5418-4263-a351-0422b2d5068d/kube-rbac-proxy/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.890121 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2hr2j_ec2d6fc2-5418-4263-a351-0422b2d5068d/manager/0.log" Nov 24 12:58:31 crc kubenswrapper[4782]: I1124 12:58:31.946090 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-kh9k5_a09a3b55-484e-461d-9f95-1e3279b323c5/kube-rbac-proxy/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.056840 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-kh9k5_a09a3b55-484e-461d-9f95-1e3279b323c5/manager/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.139360 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-6jvns_ecf25e22-396d-4c6d-9585-566ffc0d0092/kube-rbac-proxy/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.213454 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-6jvns_ecf25e22-396d-4c6d-9585-566ffc0d0092/manager/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.479050 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qkfdv_f26694c9-e51b-4e17-b20c-eafbb8164ba8/registry-server/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.544957 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bd4c479c8-db2zp_8aef8676-912f-4585-a5bb-a494867bf2e9/operator/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.758112 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-7pb8k_a9dcd8ef-dbbf-43dc-97a0-e77d942ff589/kube-rbac-proxy/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.886948 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-7pb8k_a9dcd8ef-dbbf-43dc-97a0-e77d942ff589/manager/0.log" Nov 24 12:58:32 crc kubenswrapper[4782]: I1124 12:58:32.967007 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k6n5f_60a3fcad-0c5a-4be2-b89b-4d143d3a8e62/kube-rbac-proxy/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.069332 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k6n5f_60a3fcad-0c5a-4be2-b89b-4d143d3a8e62/manager/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.258927 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4ltpx_f59f76fb-e7fa-4c9f-aec2-9af6e6aac15a/operator/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.330076 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pdvh7_5f8b3ed3-fba7-4a0e-8245-f822c548082e/kube-rbac-proxy/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.410832 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-689b7ddfcc-9brt2_55628383-51b4-4c77-ac10-476769165984/manager/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.545837 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pdvh7_5f8b3ed3-fba7-4a0e-8245-f822c548082e/manager/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.563984 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-n6cqm_76bc751e-4645-4cb1-bdfe-7e3c6732505b/kube-rbac-proxy/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.661217 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-n6cqm_76bc751e-4645-4cb1-bdfe-7e3c6732505b/manager/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.768767 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8xf2r_a0f8d31c-392e-468d-9a86-b5a482dbc6fb/kube-rbac-proxy/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.807435 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8xf2r_a0f8d31c-392e-468d-9a86-b5a482dbc6fb/manager/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.868856 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-clxm4_19e8c85c-675d-433f-8346-878034f14d24/manager/0.log" Nov 24 12:58:33 crc kubenswrapper[4782]: I1124 12:58:33.891559 4782 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-clxm4_19e8c85c-675d-433f-8346-878034f14d24/kube-rbac-proxy/0.log" Nov 24 12:58:46 crc kubenswrapper[4782]: I1124 12:58:46.490716 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:58:46 crc kubenswrapper[4782]: E1124 12:58:46.491654 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:58:51 crc kubenswrapper[4782]: I1124 12:58:51.261070 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-662wl_3d0cecf6-1037-494f-a783-682ba2b70960/control-plane-machine-set-operator/0.log" Nov 24 12:58:51 crc kubenswrapper[4782]: I1124 12:58:51.481603 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bhx6w_9babc041-e14e-4226-aebc-50e771089c3c/kube-rbac-proxy/0.log" Nov 24 12:58:51 crc kubenswrapper[4782]: I1124 12:58:51.514181 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bhx6w_9babc041-e14e-4226-aebc-50e771089c3c/machine-api-operator/0.log" Nov 24 12:59:00 crc kubenswrapper[4782]: I1124 12:59:00.490935 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:59:00 crc kubenswrapper[4782]: E1124 12:59:00.491858 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:59:04 crc kubenswrapper[4782]: I1124 12:59:04.092124 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-4dr44_5598822c-dc55-41dd-bb17-7657376575e7/cert-manager-controller/0.log" Nov 24 12:59:04 crc kubenswrapper[4782]: I1124 12:59:04.171717 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-tmfht_3df46084-4a7d-46f9-9b83-0980a55f1752/cert-manager-cainjector/0.log" Nov 24 12:59:04 crc kubenswrapper[4782]: I1124 12:59:04.238485 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2k9fj_4070eb87-d044-4a58-8a71-1a9a53cc0ad2/cert-manager-webhook/0.log" Nov 24 12:59:14 crc kubenswrapper[4782]: I1124 12:59:14.490413 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:59:14 crc kubenswrapper[4782]: E1124 12:59:14.490989 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:59:17 crc kubenswrapper[4782]: I1124 12:59:17.269557 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-w8qrk_8e70ec59-8a74-4f10-bddd-f30177d331f4/nmstate-console-plugin/0.log" Nov 24 12:59:17 crc kubenswrapper[4782]: I1124 12:59:17.491784 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2fms8_ec671193-a1fa-4295-8ac6-6f2df89a3687/nmstate-handler/0.log" Nov 24 12:59:17 crc kubenswrapper[4782]: I1124 12:59:17.529690 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-zmbn9_7b438b30-337c-4f13-8973-2a170ccb7a2a/kube-rbac-proxy/0.log" Nov 24 12:59:17 crc kubenswrapper[4782]: I1124 12:59:17.633014 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-zmbn9_7b438b30-337c-4f13-8973-2a170ccb7a2a/nmstate-metrics/0.log" Nov 24 12:59:17 crc kubenswrapper[4782]: I1124 12:59:17.761350 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-b4h4d_e2997dd2-a58c-48d1-b003-5e90a0df8a2d/nmstate-operator/0.log" Nov 24 12:59:17 crc kubenswrapper[4782]: I1124 12:59:17.837056 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-swb2c_93ae4c19-bf24-48ea-96db-36a5bdd72d01/nmstate-webhook/0.log" Nov 24 12:59:29 crc kubenswrapper[4782]: I1124 12:59:29.495566 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:59:29 crc kubenswrapper[4782]: E1124 12:59:29.496633 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xg6cl_openshift-machine-config-operator(078c4346-9841-4870-a8b8-de6911b24498)\"" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" Nov 24 12:59:33 crc kubenswrapper[4782]: I1124 12:59:33.874341 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7tjql_9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95/controller/0.log" Nov 24 12:59:33 crc kubenswrapper[4782]: I1124 12:59:33.919505 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7tjql_9dcb8c1b-c160-4ee0-85bf-2ef919e2bc95/kube-rbac-proxy/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.119647 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.287976 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.307303 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.327014 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.442365 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.646854 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.664708 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.948431 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:59:34 crc kubenswrapper[4782]: I1124 12:59:34.958763 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.175507 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-frr-files/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.187764 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-reloader/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.229018 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/cp-metrics/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.246491 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/controller/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.443668 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/frr-metrics/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.550972 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/kube-rbac-proxy/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.603519 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/kube-rbac-proxy-frr/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.708646 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/reloader/0.log" Nov 24 12:59:35 crc kubenswrapper[4782]: I1124 12:59:35.865717 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-krmz8_d4164755-f714-472a-9c05-c9978612bce6/frr-k8s-webhook-server/0.log" Nov 24 12:59:36 crc kubenswrapper[4782]: I1124 12:59:36.263864 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cf8447d56-ls4f7_e57d9da0-c929-401e-9311-7c2caa53e702/manager/0.log" Nov 24 12:59:36 crc kubenswrapper[4782]: I1124 12:59:36.364256 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8688c9769b-zmxnc_a1bc3416-5d2f-48bf-b8b9-2aa77cd3e661/webhook-server/0.log" Nov 24 12:59:36 crc kubenswrapper[4782]: I1124 12:59:36.510291 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bk5fw_680910b6-d069-4019-8024-f483987e8347/frr/0.log" Nov 24 12:59:36 crc kubenswrapper[4782]: I1124 12:59:36.563774 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xrpzs_cbc9434a-55e0-497d-9658-7531208c412e/kube-rbac-proxy/0.log" Nov 24 12:59:36 crc kubenswrapper[4782]: I1124 12:59:36.905557 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xrpzs_cbc9434a-55e0-497d-9658-7531208c412e/speaker/0.log" Nov 24 12:59:43 crc kubenswrapper[4782]: I1124 12:59:43.491765 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 12:59:44 crc kubenswrapper[4782]: I1124 12:59:44.248468 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"522ed81049ad1ce759e380346fc7b8a535ef31cac1400a6ff429b0809e977ad3"} Nov 24 12:59:50 crc kubenswrapper[4782]: I1124 12:59:50.506686 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/util/0.log" Nov 24 12:59:50 crc kubenswrapper[4782]: I1124 12:59:50.809988 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/util/0.log" Nov 24 12:59:50 crc kubenswrapper[4782]: I1124 12:59:50.825310 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/pull/0.log" Nov 24 12:59:50 crc kubenswrapper[4782]: I1124 12:59:50.838916 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/pull/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.063621 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/extract/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.091041 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/pull/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.111544 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egpzfc_0550e456-35df-49b1-937c-5477c7e72543/util/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.294212 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-utilities/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.532924 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-content/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.556333 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-content/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.599323 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-utilities/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.757796 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-utilities/0.log" Nov 24 12:59:51 crc kubenswrapper[4782]: I1124 12:59:51.797362 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/extract-content/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.131943 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9dfdl_991dd9ae-cb8c-4f12-8568-fc7de0593214/registry-server/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.379785 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5wvq_b0d5387e-15d2-42c7-9717-0e2e3e30ee09/extract-utilities/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.471054 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5wvq_b0d5387e-15d2-42c7-9717-0e2e3e30ee09/extract-utilities/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.556066 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5wvq_b0d5387e-15d2-42c7-9717-0e2e3e30ee09/extract-content/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.557535 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5wvq_b0d5387e-15d2-42c7-9717-0e2e3e30ee09/extract-content/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.765714 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5wvq_b0d5387e-15d2-42c7-9717-0e2e3e30ee09/extract-utilities/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.941811 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5wvq_b0d5387e-15d2-42c7-9717-0e2e3e30ee09/registry-server/0.log" Nov 24 12:59:52 crc kubenswrapper[4782]: I1124 12:59:52.951159 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5wvq_b0d5387e-15d2-42c7-9717-0e2e3e30ee09/extract-content/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.008689 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/util/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.241555 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/pull/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.295627 4782 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/util/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.302548 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/pull/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.551528 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/extract/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.597422 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/pull/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.658785 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6z2zjw_41bc6902-66ca-49f1-8796-2170eb3e1e00/util/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.844282 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gzb6p_b5fb7f2d-5841-44a3-a7cc-41b44c66cd73/marketplace-operator/0.log" Nov 24 12:59:53 crc kubenswrapper[4782]: I1124 12:59:53.904325 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-utilities/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.141213 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-content/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.168585 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-utilities/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.190036 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-content/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.460171 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-content/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.484986 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/extract-utilities/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.508387 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xspgw_2a0d4e14-9ca7-47ac-aec7-e209cc92bbf9/registry-server/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.690502 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-utilities/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.875227 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-content/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.892663 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-utilities/0.log" Nov 24 12:59:54 crc kubenswrapper[4782]: I1124 12:59:54.933478 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-content/0.log" Nov 24 12:59:55 crc kubenswrapper[4782]: I1124 12:59:55.100230 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-content/0.log" Nov 24 12:59:55 crc kubenswrapper[4782]: I1124 12:59:55.131694 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/extract-utilities/0.log" Nov 24 12:59:55 crc kubenswrapper[4782]: I1124 12:59:55.681325 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6vswm_73cb42af-6271-49a9-8bc3-eb50ef39a50d/registry-server/0.log" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.137213 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf"] Nov 24 13:00:00 crc kubenswrapper[4782]: E1124 13:00:00.138140 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17aaa62-0afb-4d82-bb25-c1ab25f42c3c" containerName="container-00" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.138153 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17aaa62-0afb-4d82-bb25-c1ab25f42c3c" containerName="container-00" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.138349 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17aaa62-0afb-4d82-bb25-c1ab25f42c3c" containerName="container-00" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.139003 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.141198 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.141315 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.154120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036963d7-60ca-4f52-9a77-8b908593a999-config-volume\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.154212 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82ml\" (UniqueName: \"kubernetes.io/projected/036963d7-60ca-4f52-9a77-8b908593a999-kube-api-access-m82ml\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.154266 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036963d7-60ca-4f52-9a77-8b908593a999-secret-volume\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.157148 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf"] Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.256122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036963d7-60ca-4f52-9a77-8b908593a999-config-volume\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.256246 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m82ml\" (UniqueName: \"kubernetes.io/projected/036963d7-60ca-4f52-9a77-8b908593a999-kube-api-access-m82ml\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.256334 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036963d7-60ca-4f52-9a77-8b908593a999-secret-volume\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.257009 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036963d7-60ca-4f52-9a77-8b908593a999-config-volume\") pod 
\"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.263811 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036963d7-60ca-4f52-9a77-8b908593a999-secret-volume\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.280963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m82ml\" (UniqueName: \"kubernetes.io/projected/036963d7-60ca-4f52-9a77-8b908593a999-kube-api-access-m82ml\") pod \"collect-profiles-29399820-4htbf\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.461557 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:00 crc kubenswrapper[4782]: I1124 13:00:00.979779 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf"] Nov 24 13:00:01 crc kubenswrapper[4782]: I1124 13:00:01.383482 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" event={"ID":"036963d7-60ca-4f52-9a77-8b908593a999","Type":"ContainerStarted","Data":"4a20032355eb5719a0442e68b31e390158dd95a5ae1672ede6297b8821566529"} Nov 24 13:00:01 crc kubenswrapper[4782]: I1124 13:00:01.383525 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" event={"ID":"036963d7-60ca-4f52-9a77-8b908593a999","Type":"ContainerStarted","Data":"9c795cd6d62a5b842f7ef1a053ca40f15cc46d6de9950ec6e0a23a7727309ac5"} Nov 24 13:00:01 crc kubenswrapper[4782]: I1124 13:00:01.405963 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" podStartSLOduration=1.405945032 podStartE2EDuration="1.405945032s" podCreationTimestamp="2025-11-24 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:00:01.399099048 +0000 UTC m=+3850.642932807" watchObservedRunningTime="2025-11-24 13:00:01.405945032 +0000 UTC m=+3850.649778801" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.206313 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pzkn2"] Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.209294 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.215027 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzkn2"] Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.393216 4782 generic.go:334] "Generic (PLEG): container finished" podID="036963d7-60ca-4f52-9a77-8b908593a999" containerID="4a20032355eb5719a0442e68b31e390158dd95a5ae1672ede6297b8821566529" exitCode=0 Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.393254 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" event={"ID":"036963d7-60ca-4f52-9a77-8b908593a999","Type":"ContainerDied","Data":"4a20032355eb5719a0442e68b31e390158dd95a5ae1672ede6297b8821566529"} Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.401635 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6x8x\" (UniqueName: \"kubernetes.io/projected/b4ca04a6-3d70-405e-bc74-75b52a178e4a-kube-api-access-v6x8x\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.401698 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ca04a6-3d70-405e-bc74-75b52a178e4a-utilities\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.401736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ca04a6-3d70-405e-bc74-75b52a178e4a-catalog-content\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.404611 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xqqg"] Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.406269 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.423112 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xqqg"] Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.503209 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6x8x\" (UniqueName: \"kubernetes.io/projected/b4ca04a6-3d70-405e-bc74-75b52a178e4a-kube-api-access-v6x8x\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.503257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-catalog-content\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.503291 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ca04a6-3d70-405e-bc74-75b52a178e4a-utilities\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.503327 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ca04a6-3d70-405e-bc74-75b52a178e4a-catalog-content\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.503354 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-utilities\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.503420 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwfch\" (UniqueName: \"kubernetes.io/projected/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-kube-api-access-wwfch\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.505189 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ca04a6-3d70-405e-bc74-75b52a178e4a-utilities\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.505644 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ca04a6-3d70-405e-bc74-75b52a178e4a-catalog-content\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.537906 4782 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-v6x8x\" (UniqueName: \"kubernetes.io/projected/b4ca04a6-3d70-405e-bc74-75b52a178e4a-kube-api-access-v6x8x\") pod \"redhat-operators-pzkn2\" (UID: \"b4ca04a6-3d70-405e-bc74-75b52a178e4a\") " pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.605178 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-catalog-content\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.605288 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-utilities\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.605402 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwfch\" (UniqueName: \"kubernetes.io/projected/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-kube-api-access-wwfch\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.605656 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-catalog-content\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.605937 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-utilities\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.633877 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwfch\" (UniqueName: \"kubernetes.io/projected/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-kube-api-access-wwfch\") pod \"certified-operators-7xqqg\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.728238 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:02 crc kubenswrapper[4782]: I1124 13:00:02.831292 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.288002 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xqqg"] Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.403774 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xqqg" event={"ID":"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327","Type":"ContainerStarted","Data":"9d9f2285bd98ad6ae219560ba04170b99fd0c12bb2de7a358f2ab004d7f9bd4b"} Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.483811 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzkn2"] Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.895338 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.960980 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036963d7-60ca-4f52-9a77-8b908593a999-config-volume\") pod \"036963d7-60ca-4f52-9a77-8b908593a999\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.961056 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m82ml\" (UniqueName: \"kubernetes.io/projected/036963d7-60ca-4f52-9a77-8b908593a999-kube-api-access-m82ml\") pod \"036963d7-60ca-4f52-9a77-8b908593a999\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.961309 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036963d7-60ca-4f52-9a77-8b908593a999-secret-volume\") pod \"036963d7-60ca-4f52-9a77-8b908593a999\" (UID: \"036963d7-60ca-4f52-9a77-8b908593a999\") " Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.962890 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/036963d7-60ca-4f52-9a77-8b908593a999-config-volume" (OuterVolumeSpecName: "config-volume") pod "036963d7-60ca-4f52-9a77-8b908593a999" (UID: "036963d7-60ca-4f52-9a77-8b908593a999"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.968624 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/036963d7-60ca-4f52-9a77-8b908593a999-kube-api-access-m82ml" (OuterVolumeSpecName: "kube-api-access-m82ml") pod "036963d7-60ca-4f52-9a77-8b908593a999" (UID: "036963d7-60ca-4f52-9a77-8b908593a999"). InnerVolumeSpecName "kube-api-access-m82ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4782]: I1124 13:00:03.984723 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036963d7-60ca-4f52-9a77-8b908593a999-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "036963d7-60ca-4f52-9a77-8b908593a999" (UID: "036963d7-60ca-4f52-9a77-8b908593a999"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.063903 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036963d7-60ca-4f52-9a77-8b908593a999-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.063970 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036963d7-60ca-4f52-9a77-8b908593a999-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.063984 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m82ml\" (UniqueName: \"kubernetes.io/projected/036963d7-60ca-4f52-9a77-8b908593a999-kube-api-access-m82ml\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.414872 4782 generic.go:334] "Generic (PLEG): container finished" podID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerID="cd4a5c82fda954170985b4720400c01b197e14e780f8983f1b226076ff782cb3" exitCode=0 Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.414959 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xqqg" event={"ID":"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327","Type":"ContainerDied","Data":"cd4a5c82fda954170985b4720400c01b197e14e780f8983f1b226076ff782cb3"} Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.417707 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.417718 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-4htbf" event={"ID":"036963d7-60ca-4f52-9a77-8b908593a999","Type":"ContainerDied","Data":"9c795cd6d62a5b842f7ef1a053ca40f15cc46d6de9950ec6e0a23a7727309ac5"} Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.417754 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c795cd6d62a5b842f7ef1a053ca40f15cc46d6de9950ec6e0a23a7727309ac5" Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.417906 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.422927 4782 generic.go:334] "Generic (PLEG): container finished" podID="b4ca04a6-3d70-405e-bc74-75b52a178e4a" containerID="59e9e22b553833c56e102028a27c6809d476238716906a8cd119c4817e844e5b" exitCode=0 Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.423003 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzkn2" event={"ID":"b4ca04a6-3d70-405e-bc74-75b52a178e4a","Type":"ContainerDied","Data":"59e9e22b553833c56e102028a27c6809d476238716906a8cd119c4817e844e5b"} Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.423063 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzkn2" event={"ID":"b4ca04a6-3d70-405e-bc74-75b52a178e4a","Type":"ContainerStarted","Data":"0ee46c12630564eef8768bf446bc0f3435e08cc3c2e51a205842319ee3f8881f"} Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.494596 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"] Nov 24 13:00:04 crc kubenswrapper[4782]: I1124 13:00:04.503027 4782 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-nj867"] Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.400633 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m7ndv"] Nov 24 13:00:05 crc kubenswrapper[4782]: E1124 13:00:05.405131 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036963d7-60ca-4f52-9a77-8b908593a999" containerName="collect-profiles" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.405165 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="036963d7-60ca-4f52-9a77-8b908593a999" containerName="collect-profiles" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.405482 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="036963d7-60ca-4f52-9a77-8b908593a999" containerName="collect-profiles" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.407365 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.427168 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7ndv"] Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.440993 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xqqg" event={"ID":"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327","Type":"ContainerStarted","Data":"a07a216a27be4d1f235ad8958a7de7f07670b3f20668d88bd23bfc5211bc9b90"} Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.492183 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2tf\" (UniqueName: \"kubernetes.io/projected/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-kube-api-access-hv2tf\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.492276 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-catalog-content\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.492440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-utilities\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.505931 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f6fefd-3f98-44db-b4cf-05debe821489" path="/var/lib/kubelet/pods/60f6fefd-3f98-44db-b4cf-05debe821489/volumes" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.593555 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2tf\" (UniqueName: \"kubernetes.io/projected/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-kube-api-access-hv2tf\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.593624 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-catalog-content\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.593715 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-utilities\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.594464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-utilities\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.594799 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-catalog-content\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.613005 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2tf\" (UniqueName: \"kubernetes.io/projected/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-kube-api-access-hv2tf\") pod \"redhat-marketplace-m7ndv\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:05 crc kubenswrapper[4782]: I1124 13:00:05.736486 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:06 crc kubenswrapper[4782]: I1124 13:00:06.256738 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7ndv"] Nov 24 13:00:06 crc kubenswrapper[4782]: W1124 13:00:06.265845 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1527d81d_2bce_4cbe_aa5e_d9472d790f2e.slice/crio-60a0edabf35498bc095af940481787c06d53b32f28dc272eddadb8ad5de8e1fe WatchSource:0}: Error finding container 60a0edabf35498bc095af940481787c06d53b32f28dc272eddadb8ad5de8e1fe: Status 404 returned error can't find the container with id 60a0edabf35498bc095af940481787c06d53b32f28dc272eddadb8ad5de8e1fe Nov 24 13:00:06 crc kubenswrapper[4782]: I1124 13:00:06.451058 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7ndv" event={"ID":"1527d81d-2bce-4cbe-aa5e-d9472d790f2e","Type":"ContainerStarted","Data":"60a0edabf35498bc095af940481787c06d53b32f28dc272eddadb8ad5de8e1fe"} Nov 24 13:00:07 crc kubenswrapper[4782]: I1124 13:00:07.466588 4782 generic.go:334] "Generic (PLEG): container finished" podID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerID="a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83" exitCode=0 Nov 24 13:00:07 crc kubenswrapper[4782]: I1124 13:00:07.467498 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7ndv" event={"ID":"1527d81d-2bce-4cbe-aa5e-d9472d790f2e","Type":"ContainerDied","Data":"a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83"} Nov 24 13:00:07 crc kubenswrapper[4782]: I1124 13:00:07.473350 4782 generic.go:334] "Generic (PLEG): container finished" podID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerID="a07a216a27be4d1f235ad8958a7de7f07670b3f20668d88bd23bfc5211bc9b90" exitCode=0 Nov 24 13:00:07 crc kubenswrapper[4782]: I1124 13:00:07.473402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xqqg" event={"ID":"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327","Type":"ContainerDied","Data":"a07a216a27be4d1f235ad8958a7de7f07670b3f20668d88bd23bfc5211bc9b90"} Nov 24 13:00:08 crc kubenswrapper[4782]: I1124 13:00:08.491774 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xqqg" event={"ID":"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327","Type":"ContainerStarted","Data":"dbab233214e27ba251fe937534fbc235dbe7bc6ecc0895a1a291723bf76187ed"} Nov 24 13:00:08 crc kubenswrapper[4782]: I1124 13:00:08.513806 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xqqg" podStartSLOduration=2.978734184 podStartE2EDuration="6.513790111s" podCreationTimestamp="2025-11-24 13:00:02 +0000 UTC" firstStartedPulling="2025-11-24 13:00:04.417627826 +0000 UTC m=+3853.661461595" lastFinishedPulling="2025-11-24 13:00:07.952683753 +0000 UTC m=+3857.196517522" observedRunningTime="2025-11-24 13:00:08.505260782 +0000 UTC m=+3857.749094551" watchObservedRunningTime="2025-11-24 13:00:08.513790111 +0000 UTC m=+3857.757623880" Nov 24 13:00:09 crc kubenswrapper[4782]: I1124 13:00:09.507049 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7ndv" event={"ID":"1527d81d-2bce-4cbe-aa5e-d9472d790f2e","Type":"ContainerStarted","Data":"f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63"} Nov 24 
13:00:12 crc kubenswrapper[4782]: I1124 13:00:12.729520 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:12 crc kubenswrapper[4782]: I1124 13:00:12.730088 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:12 crc kubenswrapper[4782]: I1124 13:00:12.797929 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:13 crc kubenswrapper[4782]: I1124 13:00:13.619081 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:14 crc kubenswrapper[4782]: I1124 13:00:14.209074 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xqqg"] Nov 24 13:00:14 crc kubenswrapper[4782]: I1124 13:00:14.565516 4782 generic.go:334] "Generic (PLEG): container finished" podID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerID="f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63" exitCode=0 Nov 24 13:00:14 crc kubenswrapper[4782]: I1124 13:00:14.565578 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7ndv" event={"ID":"1527d81d-2bce-4cbe-aa5e-d9472d790f2e","Type":"ContainerDied","Data":"f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63"} Nov 24 13:00:15 crc kubenswrapper[4782]: I1124 13:00:15.574825 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7xqqg" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="registry-server" containerID="cri-o://dbab233214e27ba251fe937534fbc235dbe7bc6ecc0895a1a291723bf76187ed" gracePeriod=2 Nov 24 13:00:16 crc kubenswrapper[4782]: I1124 13:00:16.587801 4782 generic.go:334] "Generic (PLEG): container finished" podID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerID="dbab233214e27ba251fe937534fbc235dbe7bc6ecc0895a1a291723bf76187ed" exitCode=0 Nov 24 13:00:16 crc kubenswrapper[4782]: I1124 13:00:16.587851 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xqqg" event={"ID":"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327","Type":"ContainerDied","Data":"dbab233214e27ba251fe937534fbc235dbe7bc6ecc0895a1a291723bf76187ed"} Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.060693 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.246316 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-utilities\") pod \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.246780 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwfch\" (UniqueName: \"kubernetes.io/projected/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-kube-api-access-wwfch\") pod \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.246818 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-catalog-content\") pod \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\" (UID: \"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327\") " Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.248040 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-utilities" (OuterVolumeSpecName: "utilities") pod "9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" (UID: "9ca16ce1-bebe-4cb3-8ce6-dce394b1e327"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.264231 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-kube-api-access-wwfch" (OuterVolumeSpecName: "kube-api-access-wwfch") pod "9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" (UID: "9ca16ce1-bebe-4cb3-8ce6-dce394b1e327"). InnerVolumeSpecName "kube-api-access-wwfch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.301364 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" (UID: "9ca16ce1-bebe-4cb3-8ce6-dce394b1e327"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.348972 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwfch\" (UniqueName: \"kubernetes.io/projected/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-kube-api-access-wwfch\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.349838 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.349965 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.672345 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzkn2" event={"ID":"b4ca04a6-3d70-405e-bc74-75b52a178e4a","Type":"ContainerStarted","Data":"951736a54abcd934aa5c4fd6d2cdb78a9a765a7a4afb25489f02416caa844244"} Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.681315 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7ndv" event={"ID":"1527d81d-2bce-4cbe-aa5e-d9472d790f2e","Type":"ContainerStarted","Data":"db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0"} Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.688620 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xqqg" event={"ID":"9ca16ce1-bebe-4cb3-8ce6-dce394b1e327","Type":"ContainerDied","Data":"9d9f2285bd98ad6ae219560ba04170b99fd0c12bb2de7a358f2ab004d7f9bd4b"} Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.688673 4782 scope.go:117] "RemoveContainer" containerID="dbab233214e27ba251fe937534fbc235dbe7bc6ecc0895a1a291723bf76187ed" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.688824 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xqqg" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.732615 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xqqg"] Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.741400 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7xqqg"] Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.743431 4782 scope.go:117] "RemoveContainer" containerID="a07a216a27be4d1f235ad8958a7de7f07670b3f20668d88bd23bfc5211bc9b90" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.750422 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m7ndv" podStartSLOduration=3.321327089 podStartE2EDuration="16.750403707s" podCreationTimestamp="2025-11-24 13:00:05 +0000 UTC" firstStartedPulling="2025-11-24 13:00:07.468919236 +0000 UTC m=+3856.712753005" lastFinishedPulling="2025-11-24 13:00:20.897995854 +0000 UTC m=+3870.141829623" observedRunningTime="2025-11-24 13:00:21.749556384 +0000 UTC m=+3870.993390163" watchObservedRunningTime="2025-11-24 13:00:21.750403707 +0000 UTC m=+3870.994237486" Nov 24 13:00:21 crc kubenswrapper[4782]: I1124 13:00:21.800590 4782 scope.go:117] "RemoveContainer" containerID="cd4a5c82fda954170985b4720400c01b197e14e780f8983f1b226076ff782cb3" Nov 24 13:00:23 crc kubenswrapper[4782]: I1124 13:00:23.511845 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" path="/var/lib/kubelet/pods/9ca16ce1-bebe-4cb3-8ce6-dce394b1e327/volumes" Nov 24 13:00:25 crc kubenswrapper[4782]: I1124 13:00:25.736925 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:25 crc kubenswrapper[4782]: I1124 13:00:25.737458 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:26 crc kubenswrapper[4782]: I1124 13:00:26.738487 4782 generic.go:334] "Generic (PLEG): container finished" podID="b4ca04a6-3d70-405e-bc74-75b52a178e4a" containerID="951736a54abcd934aa5c4fd6d2cdb78a9a765a7a4afb25489f02416caa844244" exitCode=0 Nov 24 13:00:26 crc kubenswrapper[4782]: I1124 13:00:26.738576 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzkn2" event={"ID":"b4ca04a6-3d70-405e-bc74-75b52a178e4a","Type":"ContainerDied","Data":"951736a54abcd934aa5c4fd6d2cdb78a9a765a7a4afb25489f02416caa844244"} Nov 24 13:00:26 crc kubenswrapper[4782]: I1124 13:00:26.786805 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-m7ndv" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="registry-server" probeResult="failure" output=< Nov 24 13:00:26 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 13:00:26 crc kubenswrapper[4782]: > Nov 24 13:00:27 crc kubenswrapper[4782]: I1124 13:00:27.751895 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzkn2" event={"ID":"b4ca04a6-3d70-405e-bc74-75b52a178e4a","Type":"ContainerStarted","Data":"f431d9925c91465aa5e0e4272ab7156809a69d70e37b10ecd5d75013c2fac353"} Nov 24 13:00:27 crc kubenswrapper[4782]: I1124 13:00:27.772211 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pzkn2" 
podStartSLOduration=2.874177789 podStartE2EDuration="25.772188923s" podCreationTimestamp="2025-11-24 13:00:02 +0000 UTC" firstStartedPulling="2025-11-24 13:00:04.424620134 +0000 UTC m=+3853.668453903" lastFinishedPulling="2025-11-24 13:00:27.322631268 +0000 UTC m=+3876.566465037" observedRunningTime="2025-11-24 13:00:27.768030371 +0000 UTC m=+3877.011864170" watchObservedRunningTime="2025-11-24 13:00:27.772188923 +0000 UTC m=+3877.016022692" Nov 24 13:00:32 crc kubenswrapper[4782]: I1124 13:00:32.832326 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:32 crc kubenswrapper[4782]: I1124 13:00:32.832939 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:33 crc kubenswrapper[4782]: I1124 13:00:33.876814 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pzkn2" podUID="b4ca04a6-3d70-405e-bc74-75b52a178e4a" containerName="registry-server" probeResult="failure" output=< Nov 24 13:00:33 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Nov 24 13:00:33 crc kubenswrapper[4782]: > Nov 24 13:00:35 crc kubenswrapper[4782]: I1124 13:00:35.798093 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:35 crc kubenswrapper[4782]: I1124 13:00:35.875002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:36 crc kubenswrapper[4782]: I1124 13:00:36.607113 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7ndv"] Nov 24 13:00:36 crc kubenswrapper[4782]: I1124 13:00:36.845206 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m7ndv" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="registry-server" containerID="cri-o://db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0" gracePeriod=2 Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.363249 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.512893 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-utilities\") pod \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.513340 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv2tf\" (UniqueName: \"kubernetes.io/projected/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-kube-api-access-hv2tf\") pod \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.513443 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-catalog-content\") pod \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\" (UID: \"1527d81d-2bce-4cbe-aa5e-d9472d790f2e\") " Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.513955 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-utilities" (OuterVolumeSpecName: "utilities") pod "1527d81d-2bce-4cbe-aa5e-d9472d790f2e" (UID: "1527d81d-2bce-4cbe-aa5e-d9472d790f2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.519622 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-kube-api-access-hv2tf" (OuterVolumeSpecName: "kube-api-access-hv2tf") pod "1527d81d-2bce-4cbe-aa5e-d9472d790f2e" (UID: "1527d81d-2bce-4cbe-aa5e-d9472d790f2e"). InnerVolumeSpecName "kube-api-access-hv2tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.530379 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1527d81d-2bce-4cbe-aa5e-d9472d790f2e" (UID: "1527d81d-2bce-4cbe-aa5e-d9472d790f2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.615727 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv2tf\" (UniqueName: \"kubernetes.io/projected/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-kube-api-access-hv2tf\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.615750 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.615760 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1527d81d-2bce-4cbe-aa5e-d9472d790f2e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.855261 4782 generic.go:334] "Generic (PLEG): container finished" podID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerID="db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0" exitCode=0 Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.855307 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7ndv" event={"ID":"1527d81d-2bce-4cbe-aa5e-d9472d790f2e","Type":"ContainerDied","Data":"db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0"} Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.855335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7ndv" event={"ID":"1527d81d-2bce-4cbe-aa5e-d9472d790f2e","Type":"ContainerDied","Data":"60a0edabf35498bc095af940481787c06d53b32f28dc272eddadb8ad5de8e1fe"} Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.855355 4782 scope.go:117] "RemoveContainer" containerID="db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.855345 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7ndv" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.887648 4782 scope.go:117] "RemoveContainer" containerID="f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.892648 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7ndv"] Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.914179 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7ndv"] Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.916620 4782 scope.go:117] "RemoveContainer" containerID="a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.956195 4782 scope.go:117] "RemoveContainer" containerID="db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0" Nov 24 13:00:37 crc kubenswrapper[4782]: E1124 13:00:37.956598 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0\": container with ID starting with db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0 not found: ID does not exist" containerID="db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.956654 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0"} err="failed to get container status \"db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0\": rpc error: code = NotFound desc = could not find container \"db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0\": container with ID starting with db9fd7718cbbdbc776dbb83719482b52527f7f65dcd8652dee32931573753bd0 not found: ID does not exist" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.956685 4782 scope.go:117] "RemoveContainer" containerID="f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63" Nov 24 13:00:37 crc kubenswrapper[4782]: E1124 13:00:37.957099 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63\": container with ID starting with f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63 not found: ID does not exist" containerID="f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.957131 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63"} err="failed to get container status \"f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63\": rpc error: code = NotFound desc = could not find container \"f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63\": container with ID starting with f9a8d96c6e8e2a484a321394ddfb2fbf68e6c773759dc908a38cf6024708bd63 not found: ID does not exist" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.957148 4782 scope.go:117] "RemoveContainer" containerID="a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83" Nov 24 13:00:37 crc kubenswrapper[4782]: E1124 13:00:37.957469 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83\": container with ID starting with a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83 not found: ID does not exist" containerID="a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83" Nov 24 13:00:37 crc kubenswrapper[4782]: I1124 13:00:37.957492 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83"} err="failed to get container status \"a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83\": rpc error: code = NotFound desc = could not find container \"a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83\": container with ID starting with a3fd36345a35bec285eccd6257e026ad52d6234dc4ffce72338e1ad388f7ba83 not found: ID does not exist" Nov 24 13:00:39 crc kubenswrapper[4782]: I1124 13:00:39.515264 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" path="/var/lib/kubelet/pods/1527d81d-2bce-4cbe-aa5e-d9472d790f2e/volumes" Nov 24 13:00:42 crc kubenswrapper[4782]: I1124 13:00:42.896315 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:42 crc kubenswrapper[4782]: I1124 13:00:42.953440 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pzkn2" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.016473 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzkn2"] Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.135761 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vswm"] Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.136045 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6vswm" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="registry-server" containerID="cri-o://360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13" gracePeriod=2 Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.664249 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.727381 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7njg\" (UniqueName: \"kubernetes.io/projected/73cb42af-6271-49a9-8bc3-eb50ef39a50d-kube-api-access-g7njg\") pod \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.727431 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-utilities\") pod \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.727459 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-catalog-content\") pod \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\" (UID: \"73cb42af-6271-49a9-8bc3-eb50ef39a50d\") " Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.728593 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-utilities" (OuterVolumeSpecName: "utilities") pod "73cb42af-6271-49a9-8bc3-eb50ef39a50d" (UID: "73cb42af-6271-49a9-8bc3-eb50ef39a50d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.734683 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73cb42af-6271-49a9-8bc3-eb50ef39a50d-kube-api-access-g7njg" (OuterVolumeSpecName: "kube-api-access-g7njg") pod "73cb42af-6271-49a9-8bc3-eb50ef39a50d" (UID: "73cb42af-6271-49a9-8bc3-eb50ef39a50d"). InnerVolumeSpecName "kube-api-access-g7njg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.823110 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73cb42af-6271-49a9-8bc3-eb50ef39a50d" (UID: "73cb42af-6271-49a9-8bc3-eb50ef39a50d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.829889 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7njg\" (UniqueName: \"kubernetes.io/projected/73cb42af-6271-49a9-8bc3-eb50ef39a50d-kube-api-access-g7njg\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.829919 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.829929 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73cb42af-6271-49a9-8bc3-eb50ef39a50d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.921594 4782 generic.go:334] "Generic (PLEG): container finished" podID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerID="360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13" exitCode=0 Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.922524 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vswm" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.923279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vswm" event={"ID":"73cb42af-6271-49a9-8bc3-eb50ef39a50d","Type":"ContainerDied","Data":"360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13"} Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.923326 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vswm" event={"ID":"73cb42af-6271-49a9-8bc3-eb50ef39a50d","Type":"ContainerDied","Data":"ae4a71e9d0b2d589b204b0c4567f475b5b302819933bbdb86213eb3398d6cb45"} Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.923344 4782 scope.go:117] "RemoveContainer" containerID="360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.956281 4782 scope.go:117] "RemoveContainer" containerID="2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.988633 4782 scope.go:117] "RemoveContainer" containerID="5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a" Nov 24 13:00:43 crc kubenswrapper[4782]: I1124 13:00:43.992249 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vswm"] Nov 24 13:00:44 crc kubenswrapper[4782]: I1124 13:00:44.000491 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6vswm"] Nov 24 13:00:44 crc kubenswrapper[4782]: I1124 13:00:44.028767 4782 scope.go:117] "RemoveContainer" containerID="360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13" Nov 24 13:00:44 crc kubenswrapper[4782]: E1124 13:00:44.029146 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13\": container with ID starting with 360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13 not found: ID does not exist" containerID="360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13" Nov 24 13:00:44 crc kubenswrapper[4782]: I1124 13:00:44.029176 4782 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13"} err="failed to get container status \"360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13\": rpc error: code = NotFound desc = could not find container \"360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13\": container with ID starting with 360699f74fcbe37314032019e90552ed882d3ab7ec7f5b0de42a1c002cbf9b13 not found: ID does not exist" Nov 24 13:00:44 crc kubenswrapper[4782]: I1124 13:00:44.029196 4782 scope.go:117] "RemoveContainer" containerID="2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832" Nov 24 13:00:44 crc kubenswrapper[4782]: E1124 13:00:44.030551 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832\": container with ID starting with 2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832 not found: ID does not exist" containerID="2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832" Nov 24 13:00:44 crc kubenswrapper[4782]: I1124 13:00:44.030658 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832"} err="failed to get container status \"2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832\": rpc error: code = NotFound desc = could not find container \"2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832\": container with ID starting with 2c7108f0ce5a36caf4766f19c607e746d17b18e4a847a8f2e70fd6900aff1832 not found: ID does not exist" Nov 24 13:00:44 crc kubenswrapper[4782]: I1124 13:00:44.030743 4782 scope.go:117] "RemoveContainer" containerID="5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a" Nov 24 13:00:44 crc kubenswrapper[4782]: E1124 13:00:44.031183 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a\": container with ID starting with 5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a not found: ID does not exist" containerID="5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a" Nov 24 13:00:44 crc kubenswrapper[4782]: I1124 13:00:44.031226 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a"} err="failed to get container status \"5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a\": rpc error: code = NotFound desc = could not find container \"5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a\": container with ID starting with 5f7f357e1402ac1563386513c425adb6be703b406ff3d66022c670788e3ebd5a not found: ID does not exist" Nov 24 13:00:45 crc kubenswrapper[4782]: I1124 13:00:45.505345 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" path="/var/lib/kubelet/pods/73cb42af-6271-49a9-8bc3-eb50ef39a50d/volumes" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.146138 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399821-fkxdm"] Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147132 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147146 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147159 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147166 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147180 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147187 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147201 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147207 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147219 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147225 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147235 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147241 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147254 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147260 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147269 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147274 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4782]: E1124 13:01:00.147288 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147294 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147508 4782 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1527d81d-2bce-4cbe-aa5e-d9472d790f2e" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147520 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca16ce1-bebe-4cb3-8ce6-dce394b1e327" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.147533 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="73cb42af-6271-49a9-8bc3-eb50ef39a50d" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.148081 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.172055 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-fkxdm"] Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.279309 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-combined-ca-bundle\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.279625 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-fernet-keys\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.279852 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r84n\" (UniqueName: \"kubernetes.io/projected/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-kube-api-access-2r84n\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.280032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-config-data\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.381857 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r84n\" (UniqueName: \"kubernetes.io/projected/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-kube-api-access-2r84n\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.382004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-config-data\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.382073 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-combined-ca-bundle\") pod 
\"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.382243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-fernet-keys\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.388420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-fernet-keys\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.388978 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-combined-ca-bundle\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.389011 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-config-data\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.400681 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r84n\" (UniqueName: \"kubernetes.io/projected/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-kube-api-access-2r84n\") pod \"keystone-cron-29399821-fkxdm\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.464910 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:00 crc kubenswrapper[4782]: I1124 13:01:00.920238 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-fkxdm"] Nov 24 13:01:00 crc kubenswrapper[4782]: W1124 13:01:00.934240 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050b4c42_cdc5_4e8d_b3f0_2894bcf34395.slice/crio-827f6ff42799b586787448a87f6e0b93c560bccb2329380cc6d6d296574c3b95 WatchSource:0}: Error finding container 827f6ff42799b586787448a87f6e0b93c560bccb2329380cc6d6d296574c3b95: Status 404 returned error can't find the container with id 827f6ff42799b586787448a87f6e0b93c560bccb2329380cc6d6d296574c3b95 Nov 24 13:01:01 crc kubenswrapper[4782]: I1124 13:01:01.113960 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-fkxdm" event={"ID":"050b4c42-cdc5-4e8d-b3f0-2894bcf34395","Type":"ContainerStarted","Data":"827f6ff42799b586787448a87f6e0b93c560bccb2329380cc6d6d296574c3b95"} Nov 24 13:01:02 crc kubenswrapper[4782]: I1124 13:01:02.126604 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-fkxdm" event={"ID":"050b4c42-cdc5-4e8d-b3f0-2894bcf34395","Type":"ContainerStarted","Data":"e0422b0fb097914a6a9524022684a27bd2194702d72d511fe4b09b1d92e8d420"} Nov 24 13:01:02 crc kubenswrapper[4782]: I1124 13:01:02.146209 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399821-fkxdm" podStartSLOduration=2.146175735 podStartE2EDuration="2.146175735s" podCreationTimestamp="2025-11-24 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:01:02.142895437 +0000 UTC m=+3911.386729206" watchObservedRunningTime="2025-11-24 13:01:02.146175735 +0000 UTC m=+3911.390009494" Nov 24 13:01:05 crc kubenswrapper[4782]: I1124 13:01:05.154318 4782 generic.go:334] "Generic (PLEG): container finished" podID="050b4c42-cdc5-4e8d-b3f0-2894bcf34395" containerID="e0422b0fb097914a6a9524022684a27bd2194702d72d511fe4b09b1d92e8d420" exitCode=0 Nov 24 13:01:05 crc kubenswrapper[4782]: I1124 13:01:05.154925 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-fkxdm" event={"ID":"050b4c42-cdc5-4e8d-b3f0-2894bcf34395","Type":"ContainerDied","Data":"e0422b0fb097914a6a9524022684a27bd2194702d72d511fe4b09b1d92e8d420"} Nov 24 13:01:05 crc kubenswrapper[4782]: I1124 13:01:05.499491 4782 scope.go:117] "RemoveContainer" containerID="92454e80fcb8515f07b349d7e0d6d1182992bc2066f89e1b4bf81f9a23c4ef86" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:06.560940 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:06.721853 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r84n\" (UniqueName: \"kubernetes.io/projected/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-kube-api-access-2r84n\") pod \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:06.722035 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-combined-ca-bundle\") pod \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:06.722061 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-fernet-keys\") pod \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:06.722101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-config-data\") pod \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\" (UID: \"050b4c42-cdc5-4e8d-b3f0-2894bcf34395\") " Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.092471 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "050b4c42-cdc5-4e8d-b3f0-2894bcf34395" (UID: "050b4c42-cdc5-4e8d-b3f0-2894bcf34395"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.093936 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-kube-api-access-2r84n" (OuterVolumeSpecName: "kube-api-access-2r84n") pod "050b4c42-cdc5-4e8d-b3f0-2894bcf34395" (UID: "050b4c42-cdc5-4e8d-b3f0-2894bcf34395"). InnerVolumeSpecName "kube-api-access-2r84n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.129739 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.129787 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r84n\" (UniqueName: \"kubernetes.io/projected/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-kube-api-access-2r84n\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.176003 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-fkxdm" event={"ID":"050b4c42-cdc5-4e8d-b3f0-2894bcf34395","Type":"ContainerDied","Data":"827f6ff42799b586787448a87f6e0b93c560bccb2329380cc6d6d296574c3b95"} Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.176067 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827f6ff42799b586787448a87f6e0b93c560bccb2329380cc6d6d296574c3b95" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.176153 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-fkxdm" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.232236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050b4c42-cdc5-4e8d-b3f0-2894bcf34395" (UID: "050b4c42-cdc5-4e8d-b3f0-2894bcf34395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.258259 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-config-data" (OuterVolumeSpecName: "config-data") pod "050b4c42-cdc5-4e8d-b3f0-2894bcf34395" (UID: "050b4c42-cdc5-4e8d-b3f0-2894bcf34395"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.333894 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:07 crc kubenswrapper[4782]: I1124 13:01:07.334331 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050b4c42-cdc5-4e8d-b3f0-2894bcf34395-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:51 crc kubenswrapper[4782]: I1124 13:01:51.590706 4782 generic.go:334] "Generic (PLEG): container finished" podID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerID="6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6" exitCode=0 Nov 24 13:01:51 crc kubenswrapper[4782]: I1124 13:01:51.590717 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" event={"ID":"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd","Type":"ContainerDied","Data":"6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6"} Nov 24 13:01:51 crc kubenswrapper[4782]: I1124 13:01:51.592278 4782 scope.go:117] "RemoveContainer" containerID="6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6" Nov 24 13:01:51 crc kubenswrapper[4782]: I1124 13:01:51.945921 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zb8ql_must-gather-kgfh9_d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd/gather/0.log" Nov 24 13:02:00 crc kubenswrapper[4782]: I1124 13:02:00.411395 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:02:00 crc kubenswrapper[4782]: I1124 13:02:00.411970 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:02:03 crc kubenswrapper[4782]: I1124 13:02:03.638177 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zb8ql/must-gather-kgfh9"] Nov 24 13:02:03 crc kubenswrapper[4782]: I1124 13:02:03.638948 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerName="copy" containerID="cri-o://b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968" gracePeriod=2 Nov 24 13:02:03 crc kubenswrapper[4782]: I1124 13:02:03.646666 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zb8ql/must-gather-kgfh9"] Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.154902 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zb8ql_must-gather-kgfh9_d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd/copy/0.log" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.155571 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.317911 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz5wk\" (UniqueName: \"kubernetes.io/projected/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-kube-api-access-dz5wk\") pod \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.318083 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-must-gather-output\") pod \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\" (UID: \"d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd\") " Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.323686 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-kube-api-access-dz5wk" (OuterVolumeSpecName: "kube-api-access-dz5wk") pod "d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" (UID: "d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd"). InnerVolumeSpecName "kube-api-access-dz5wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.428299 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz5wk\" (UniqueName: \"kubernetes.io/projected/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-kube-api-access-dz5wk\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.469860 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" (UID: "d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.529725 4782 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.770709 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zb8ql_must-gather-kgfh9_d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd/copy/0.log" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.774556 4782 generic.go:334] "Generic (PLEG): container finished" podID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerID="b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968" exitCode=143 Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.774619 4782 scope.go:117] "RemoveContainer" containerID="b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.774788 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zb8ql/must-gather-kgfh9" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.853890 4782 scope.go:117] "RemoveContainer" containerID="6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.893690 4782 scope.go:117] "RemoveContainer" containerID="b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968" Nov 24 13:02:04 crc kubenswrapper[4782]: E1124 13:02:04.894277 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968\": container with ID starting with b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968 not found: ID does not exist" containerID="b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.894307 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968"} err="failed to get container status \"b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968\": rpc error: code = NotFound desc = could not find container \"b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968\": container with ID starting with b1a96eecdb3511b72c7f534d07e2595e441e6b03227c81da3e7cb5b89f4f1968 not found: ID does not exist" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.894343 4782 scope.go:117] "RemoveContainer" containerID="6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6" Nov 24 13:02:04 crc kubenswrapper[4782]: E1124 13:02:04.894630 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6\": container with ID starting with 6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6 not found: ID does not exist" containerID="6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6" Nov 24 13:02:04 crc kubenswrapper[4782]: I1124 13:02:04.894712 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6"} err="failed to get container status \"6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6\": rpc error: code = NotFound desc = could not find container \"6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6\": container with ID starting with 6677128d08b0e8ce54ed69980a149afda3b729ab3ac23065594cec04a4022de6 not found: ID does not exist" Nov 24 13:02:05 crc kubenswrapper[4782]: I1124 13:02:05.503211 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" path="/var/lib/kubelet/pods/d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd/volumes" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.411141 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.412798 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" 
podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.865674 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-htlk7"] Nov 24 13:02:30 crc kubenswrapper[4782]: E1124 13:02:30.866913 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerName="gather" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.867073 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerName="gather" Nov 24 13:02:30 crc kubenswrapper[4782]: E1124 13:02:30.867201 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerName="copy" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.867308 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerName="copy" Nov 24 13:02:30 crc kubenswrapper[4782]: E1124 13:02:30.867484 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050b4c42-cdc5-4e8d-b3f0-2894bcf34395" containerName="keystone-cron" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.867608 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="050b4c42-cdc5-4e8d-b3f0-2894bcf34395" containerName="keystone-cron" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.868048 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerName="gather" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.868166 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="050b4c42-cdc5-4e8d-b3f0-2894bcf34395" containerName="keystone-cron" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.868247 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5181e71-0a6a-4cf8-8469-1cdfacdbf4bd" containerName="copy" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.869981 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:30 crc kubenswrapper[4782]: I1124 13:02:30.951856 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htlk7"] Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.018948 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-utilities\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.019185 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-catalog-content\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.019502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwpb\" (UniqueName: \"kubernetes.io/projected/a68ead6a-0653-4178-b767-cdfd9216c2ba-kube-api-access-scwpb\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.120925 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-catalog-content\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.120999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scwpb\" (UniqueName: \"kubernetes.io/projected/a68ead6a-0653-4178-b767-cdfd9216c2ba-kube-api-access-scwpb\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.121042 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-utilities\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.121484 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-utilities\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.121559 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-catalog-content\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.143329 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scwpb\" (UniqueName: \"kubernetes.io/projected/a68ead6a-0653-4178-b767-cdfd9216c2ba-kube-api-access-scwpb\") pod \"community-operators-htlk7\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.195955 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:31 crc kubenswrapper[4782]: I1124 13:02:31.747206 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htlk7"] Nov 24 13:02:32 crc kubenswrapper[4782]: I1124 13:02:32.020340 4782 generic.go:334] "Generic (PLEG): container finished" podID="a68ead6a-0653-4178-b767-cdfd9216c2ba" containerID="35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b" exitCode=0 Nov 24 13:02:32 crc kubenswrapper[4782]: I1124 13:02:32.020432 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htlk7" event={"ID":"a68ead6a-0653-4178-b767-cdfd9216c2ba","Type":"ContainerDied","Data":"35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b"} Nov 24 13:02:32 crc kubenswrapper[4782]: I1124 13:02:32.021032 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htlk7" event={"ID":"a68ead6a-0653-4178-b767-cdfd9216c2ba","Type":"ContainerStarted","Data":"a640d810aad02c2a83434c79cc6b8296586b5c5ba65bcf8e304f3129829760d8"} Nov 24 13:02:33 crc kubenswrapper[4782]: I1124 13:02:33.033157 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htlk7" event={"ID":"a68ead6a-0653-4178-b767-cdfd9216c2ba","Type":"ContainerStarted","Data":"c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e"} Nov 24 13:02:35 crc kubenswrapper[4782]: I1124 13:02:35.062711 4782 generic.go:334] "Generic (PLEG): container finished" podID="a68ead6a-0653-4178-b767-cdfd9216c2ba" containerID="c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e" exitCode=0 Nov 24 13:02:35 crc kubenswrapper[4782]: I1124 13:02:35.062817 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htlk7" event={"ID":"a68ead6a-0653-4178-b767-cdfd9216c2ba","Type":"ContainerDied","Data":"c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e"} Nov 24 13:02:36 crc kubenswrapper[4782]: I1124 13:02:36.075583 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htlk7" event={"ID":"a68ead6a-0653-4178-b767-cdfd9216c2ba","Type":"ContainerStarted","Data":"313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d"} Nov 24 13:02:36 crc kubenswrapper[4782]: I1124 13:02:36.096747 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-htlk7" podStartSLOduration=2.627100842 podStartE2EDuration="6.096721717s" podCreationTimestamp="2025-11-24 13:02:30 +0000 UTC" firstStartedPulling="2025-11-24 13:02:32.022537813 +0000 UTC m=+4001.266371582" lastFinishedPulling="2025-11-24 13:02:35.492158688 +0000 UTC m=+4004.735992457" observedRunningTime="2025-11-24 13:02:36.089981206 +0000 UTC m=+4005.333814985" watchObservedRunningTime="2025-11-24 13:02:36.096721717 +0000 UTC m=+4005.340555486" Nov 24 13:02:41 crc kubenswrapper[4782]: I1124 13:02:41.196247 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:41 crc kubenswrapper[4782]: I1124 13:02:41.196899 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:41 crc kubenswrapper[4782]: I1124 13:02:41.241465 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:42 crc kubenswrapper[4782]: I1124 13:02:42.183511 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:42 crc kubenswrapper[4782]: I1124 13:02:42.239191 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htlk7"] Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.145576 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-htlk7" podUID="a68ead6a-0653-4178-b767-cdfd9216c2ba" containerName="registry-server" containerID="cri-o://313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d" gracePeriod=2 Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.615057 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.788814 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-catalog-content\") pod \"a68ead6a-0653-4178-b767-cdfd9216c2ba\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.788950 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-utilities\") pod \"a68ead6a-0653-4178-b767-cdfd9216c2ba\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.789105 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scwpb\" (UniqueName: \"kubernetes.io/projected/a68ead6a-0653-4178-b767-cdfd9216c2ba-kube-api-access-scwpb\") pod \"a68ead6a-0653-4178-b767-cdfd9216c2ba\" (UID: \"a68ead6a-0653-4178-b767-cdfd9216c2ba\") " Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.790505 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-utilities" (OuterVolumeSpecName: "utilities") pod "a68ead6a-0653-4178-b767-cdfd9216c2ba" (UID: "a68ead6a-0653-4178-b767-cdfd9216c2ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.797594 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a68ead6a-0653-4178-b767-cdfd9216c2ba-kube-api-access-scwpb" (OuterVolumeSpecName: "kube-api-access-scwpb") pod "a68ead6a-0653-4178-b767-cdfd9216c2ba" (UID: "a68ead6a-0653-4178-b767-cdfd9216c2ba"). InnerVolumeSpecName "kube-api-access-scwpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.850886 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a68ead6a-0653-4178-b767-cdfd9216c2ba" (UID: "a68ead6a-0653-4178-b767-cdfd9216c2ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.891338 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.891387 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68ead6a-0653-4178-b767-cdfd9216c2ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:44 crc kubenswrapper[4782]: I1124 13:02:44.891397 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scwpb\" (UniqueName: \"kubernetes.io/projected/a68ead6a-0653-4178-b767-cdfd9216c2ba-kube-api-access-scwpb\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.156611 4782 generic.go:334] "Generic (PLEG): container finished" podID="a68ead6a-0653-4178-b767-cdfd9216c2ba" containerID="313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d" exitCode=0 Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.156785 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htlk7" event={"ID":"a68ead6a-0653-4178-b767-cdfd9216c2ba","Type":"ContainerDied","Data":"313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d"} Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.157481 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htlk7" event={"ID":"a68ead6a-0653-4178-b767-cdfd9216c2ba","Type":"ContainerDied","Data":"a640d810aad02c2a83434c79cc6b8296586b5c5ba65bcf8e304f3129829760d8"} Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.157517 4782 scope.go:117] "RemoveContainer" containerID="313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.156888 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htlk7" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.177597 4782 scope.go:117] "RemoveContainer" containerID="c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.201228 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htlk7"] Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.212159 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-htlk7"] Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.224641 4782 scope.go:117] "RemoveContainer" containerID="35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.252268 4782 scope.go:117] "RemoveContainer" containerID="313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d" Nov 24 13:02:45 crc kubenswrapper[4782]: E1124 13:02:45.252914 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d\": container with ID starting with 313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d not found: ID does not exist" containerID="313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.252996 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d"} err="failed to get container status \"313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d\": rpc error: code = NotFound desc = could not find container \"313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d\": container with ID starting with 313de2846b8b16d379089560519f1e88155280b94b48dc0e856819b19f1a308d not found: ID does not exist" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.253042 4782 scope.go:117] "RemoveContainer" containerID="c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e" Nov 24 13:02:45 crc kubenswrapper[4782]: E1124 13:02:45.253708 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e\": container with ID starting with c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e not found: ID does not exist" containerID="c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.253761 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e"} err="failed to get container status \"c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e\": rpc error: code = NotFound desc = could not find container \"c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e\": container with ID starting with c6d7247402a365f528b52070827c7d843b5c5bbb0b4ba8f936e8b91ff841544e not found: ID does not exist" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.253798 4782 scope.go:117] "RemoveContainer" containerID="35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b" Nov 24 13:02:45 crc kubenswrapper[4782]: E1124 13:02:45.254175 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b\": container with ID starting with 35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b not found: ID does not exist" containerID="35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.254228 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b"} err="failed to get container status \"35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b\": rpc error: code = NotFound desc = could not find container \"35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b\": container with ID starting with 35026273874bdaef6b8f3cae2a2309addb7de78e92de8dd30d47e73fcc4b398b not found: ID does not exist" Nov 24 13:02:45 crc kubenswrapper[4782]: I1124 13:02:45.503514 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a68ead6a-0653-4178-b767-cdfd9216c2ba" path="/var/lib/kubelet/pods/a68ead6a-0653-4178-b767-cdfd9216c2ba/volumes" Nov 24 13:03:00 crc kubenswrapper[4782]: I1124 13:03:00.411121 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:03:00 crc kubenswrapper[4782]: I1124 13:03:00.411749 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:03:00 crc kubenswrapper[4782]: I1124 13:03:00.411809 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" Nov 24 13:03:00 crc kubenswrapper[4782]: I1124 13:03:00.412679 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"522ed81049ad1ce759e380346fc7b8a535ef31cac1400a6ff429b0809e977ad3"} pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:03:00 crc kubenswrapper[4782]: I1124 13:03:00.412757 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" containerID="cri-o://522ed81049ad1ce759e380346fc7b8a535ef31cac1400a6ff429b0809e977ad3" gracePeriod=600 Nov 24 13:03:01 crc kubenswrapper[4782]: I1124 13:03:01.523075 4782 generic.go:334] "Generic (PLEG): container finished" podID="078c4346-9841-4870-a8b8-de6911b24498" containerID="522ed81049ad1ce759e380346fc7b8a535ef31cac1400a6ff429b0809e977ad3" exitCode=0 Nov 24 13:03:01 crc kubenswrapper[4782]: I1124 13:03:01.523140 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" 
event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerDied","Data":"522ed81049ad1ce759e380346fc7b8a535ef31cac1400a6ff429b0809e977ad3"} Nov 24 13:03:01 crc kubenswrapper[4782]: I1124 13:03:01.523679 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" event={"ID":"078c4346-9841-4870-a8b8-de6911b24498","Type":"ContainerStarted","Data":"4f5dd05f11981060e9958e8e863b5df295bc90af898c577657c37ba9957b971c"} Nov 24 13:03:01 crc kubenswrapper[4782]: I1124 13:03:01.523703 4782 scope.go:117] "RemoveContainer" containerID="b097343fb64821b2d5ccba6b91c24794438f78df27a68e364c6dacc4a90f593c" Nov 24 13:03:05 crc kubenswrapper[4782]: I1124 13:03:05.671881 4782 scope.go:117] "RemoveContainer" containerID="13daba3927b20e0516ec7933234ebf7d42a49e6db0c1a13bd98dbd1c8c0cab96" Nov 24 13:05:00 crc kubenswrapper[4782]: I1124 13:05:00.410485 4782 patch_prober.go:28] interesting pod/machine-config-daemon-xg6cl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:05:00 crc kubenswrapper[4782]: I1124 13:05:00.411033 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xg6cl" podUID="078c4346-9841-4870-a8b8-de6911b24498" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"